Feb 19 10:08:30 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 10:08:30 crc restorecon[4676]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 10:08:30 crc restorecon[4676]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc 
restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc 
restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 
10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc 
restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 10:08:30 crc 
restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 
10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 10:08:30 crc 
restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc 
restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc 
restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 10:08:30 crc restorecon[4676]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:30 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 
crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc 
restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc 
restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc 
restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc 
restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 10:08:31 crc restorecon[4676]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 10:08:31 crc restorecon[4676]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 10:08:32 crc kubenswrapper[4811]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 10:08:32 crc kubenswrapper[4811]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 10:08:32 crc kubenswrapper[4811]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 10:08:32 crc kubenswrapper[4811]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 10:08:32 crc kubenswrapper[4811]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 10:08:32 crc kubenswrapper[4811]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.236141 4811 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243521 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243542 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243547 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243551 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243555 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243559 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243562 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243566 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243570 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 
10:08:32.243574 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243578 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243590 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243594 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243597 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243601 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243604 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243608 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243612 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243616 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243620 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243624 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243628 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243632 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243636 4811 
feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243641 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243644 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243649 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243654 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243658 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243662 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243666 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243670 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243673 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243677 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243681 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243684 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243688 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243692 
4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243695 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243699 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243703 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243707 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243712 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243716 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243720 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243724 4811 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243728 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243732 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243735 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243739 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243742 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243746 4811 feature_gate.go:330] unrecognized feature 
gate: InsightsConfig Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243749 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243753 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243757 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243760 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243764 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243767 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243772 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243775 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243779 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243782 4811 feature_gate.go:330] unrecognized feature gate: Example Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243786 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243791 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243795 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243800 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243803 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243807 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243811 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243816 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.243820 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243913 4811 flags.go:64] FLAG: --address="0.0.0.0" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243922 4811 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243929 4811 flags.go:64] FLAG: --anonymous-auth="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243935 4811 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243941 4811 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243945 4811 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243951 4811 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243956 4811 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 19 10:08:32 crc kubenswrapper[4811]: 
I0219 10:08:32.243961 4811 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243965 4811 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243970 4811 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243975 4811 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243979 4811 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243983 4811 flags.go:64] FLAG: --cgroup-root="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243988 4811 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243992 4811 flags.go:64] FLAG: --client-ca-file="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.243996 4811 flags.go:64] FLAG: --cloud-config="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244020 4811 flags.go:64] FLAG: --cloud-provider="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244025 4811 flags.go:64] FLAG: --cluster-dns="[]" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244030 4811 flags.go:64] FLAG: --cluster-domain="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244034 4811 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244038 4811 flags.go:64] FLAG: --config-dir="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244042 4811 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244047 4811 flags.go:64] FLAG: --container-log-max-files="5" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244056 4811 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 19 10:08:32 crc 
kubenswrapper[4811]: I0219 10:08:32.244061 4811 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244065 4811 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244069 4811 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244074 4811 flags.go:64] FLAG: --contention-profiling="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244078 4811 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244082 4811 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244086 4811 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244090 4811 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244096 4811 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244101 4811 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244105 4811 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244109 4811 flags.go:64] FLAG: --enable-load-reader="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244113 4811 flags.go:64] FLAG: --enable-server="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244117 4811 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244123 4811 flags.go:64] FLAG: --event-burst="100" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244128 4811 flags.go:64] FLAG: --event-qps="50" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244132 4811 flags.go:64] FLAG: 
--event-storage-age-limit="default=0" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244136 4811 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244140 4811 flags.go:64] FLAG: --eviction-hard="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244145 4811 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244149 4811 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244154 4811 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244158 4811 flags.go:64] FLAG: --eviction-soft="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244162 4811 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244167 4811 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244171 4811 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244175 4811 flags.go:64] FLAG: --experimental-mounter-path="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244179 4811 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244183 4811 flags.go:64] FLAG: --fail-swap-on="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244187 4811 flags.go:64] FLAG: --feature-gates="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244192 4811 flags.go:64] FLAG: --file-check-frequency="20s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244196 4811 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244200 4811 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244205 
4811 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244209 4811 flags.go:64] FLAG: --healthz-port="10248" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244213 4811 flags.go:64] FLAG: --help="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244217 4811 flags.go:64] FLAG: --hostname-override="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244221 4811 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244226 4811 flags.go:64] FLAG: --http-check-frequency="20s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244230 4811 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244237 4811 flags.go:64] FLAG: --image-credential-provider-config="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244241 4811 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244245 4811 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244249 4811 flags.go:64] FLAG: --image-service-endpoint="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244253 4811 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244257 4811 flags.go:64] FLAG: --kube-api-burst="100" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244262 4811 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244266 4811 flags.go:64] FLAG: --kube-api-qps="50" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244270 4811 flags.go:64] FLAG: --kube-reserved="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244274 4811 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244278 4811 
flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244282 4811 flags.go:64] FLAG: --kubelet-cgroups="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244287 4811 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244292 4811 flags.go:64] FLAG: --lock-file="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244296 4811 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244300 4811 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244305 4811 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244311 4811 flags.go:64] FLAG: --log-json-split-stream="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244315 4811 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244319 4811 flags.go:64] FLAG: --log-text-split-stream="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244323 4811 flags.go:64] FLAG: --logging-format="text" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244327 4811 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244331 4811 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244335 4811 flags.go:64] FLAG: --manifest-url="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244339 4811 flags.go:64] FLAG: --manifest-url-header="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244344 4811 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244349 4811 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 
10:08:32.244354 4811 flags.go:64] FLAG: --max-pods="110" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244358 4811 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244362 4811 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244366 4811 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244370 4811 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244375 4811 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244379 4811 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244383 4811 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244393 4811 flags.go:64] FLAG: --node-status-max-images="50" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244398 4811 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244402 4811 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244406 4811 flags.go:64] FLAG: --pod-cidr="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244410 4811 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244416 4811 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244420 4811 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244424 4811 flags.go:64] FLAG: 
--pods-per-core="0" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244428 4811 flags.go:64] FLAG: --port="10250" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244432 4811 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244437 4811 flags.go:64] FLAG: --provider-id="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244442 4811 flags.go:64] FLAG: --qos-reserved="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244465 4811 flags.go:64] FLAG: --read-only-port="10255" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244471 4811 flags.go:64] FLAG: --register-node="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244476 4811 flags.go:64] FLAG: --register-schedulable="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244481 4811 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244489 4811 flags.go:64] FLAG: --registry-burst="10" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244493 4811 flags.go:64] FLAG: --registry-qps="5" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244497 4811 flags.go:64] FLAG: --reserved-cpus="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244501 4811 flags.go:64] FLAG: --reserved-memory="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244506 4811 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244511 4811 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244515 4811 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244519 4811 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244523 4811 flags.go:64] FLAG: --runonce="false" Feb 19 10:08:32 crc kubenswrapper[4811]: 
I0219 10:08:32.244527 4811 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244531 4811 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244536 4811 flags.go:64] FLAG: --seccomp-default="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244539 4811 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244544 4811 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244548 4811 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244552 4811 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244557 4811 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244561 4811 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244565 4811 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244569 4811 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244573 4811 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244577 4811 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244581 4811 flags.go:64] FLAG: --system-cgroups="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244585 4811 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244592 4811 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244596 4811 flags.go:64] FLAG: 
--tls-cert-file="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244600 4811 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244605 4811 flags.go:64] FLAG: --tls-min-version="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244609 4811 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244613 4811 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244618 4811 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244622 4811 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244627 4811 flags.go:64] FLAG: --v="2" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244633 4811 flags.go:64] FLAG: --version="false" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244638 4811 flags.go:64] FLAG: --vmodule="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244643 4811 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.244647 4811 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244759 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244764 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244768 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244772 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244776 4811 feature_gate.go:330] unrecognized feature 
gate: VSphereControlPlaneMachineSet Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244779 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244784 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244788 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244792 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244796 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244800 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244805 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244808 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244813 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244817 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244821 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244825 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244829 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244832 4811 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244836 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244839 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244843 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244847 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244850 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244854 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244857 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244861 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244864 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244868 4811 
feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244871 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244874 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244879 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244882 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244886 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244889 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244893 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244896 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244900 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244904 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244907 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244911 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244914 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244918 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 10:08:32 crc 
kubenswrapper[4811]: W0219 10:08:32.244921 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244925 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244928 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244931 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244936 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244940 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244944 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244948 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244952 4811 feature_gate.go:330] unrecognized feature gate: Example Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244957 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244962 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244966 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244970 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244974 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244978 4811 
feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244981 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244986 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244990 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244995 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.244998 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.245003 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.245007 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.245012 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.245016 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.245021 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.245025 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.245030 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.245034 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.249296 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.263291 4811 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.263873 4811 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.263993 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264037 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264043 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264049 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264056 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264063 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264069 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264074 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264079 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264086 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264093 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264099 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264106 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264113 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264120 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264127 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264133 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264139 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264146 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264156 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264167 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264176 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264185 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264195 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264204 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264210 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264216 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264225 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264233 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264239 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264245 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264253 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264259 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264266 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264274 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264281 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264287 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264293 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264299 4811 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264304 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264310 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264316 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264321 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264327 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264332 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264337 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264343 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264349 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264355 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264360 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264366 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264372 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264377 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264384 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264390 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264394 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264401 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264408 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264414 4811 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264420 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264426 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264432 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264438 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264443 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264471 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264476 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264481 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264487 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264492 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264497 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264503 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.264513 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264687 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264699 4811 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264705 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264711 4811 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264717 4811 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264722 4811 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264727 4811 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264733 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264738 4811 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264743 4811 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264748 4811 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264753 4811 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264758 4811 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264765 4811 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264774 4811 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264780 4811 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264785 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264790 4811 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264795 4811 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264801 4811 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264806 4811 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264811 4811 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264816 4811 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264820 4811 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264825 4811 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264830 4811 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264836 4811 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264841 4811 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264846 4811 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264851 4811 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264856 4811 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264861 4811 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264866 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264871 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264876 4811 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264881 4811 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264887 4811 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264892 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264897 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264903 4811 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264908 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264913 4811 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264918 4811 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264924 4811 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264929 4811 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264936 4811 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264944 4811 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264950 4811 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264956 4811 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264961 4811 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264966 4811 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264971 4811 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264977 4811 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264983 4811 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264989 4811 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.264995 4811 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265001 4811 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265006 4811 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265011 4811 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265016 4811 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265021 4811 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265027 4811 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265033 4811 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265038 4811 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265043 4811 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265048 4811 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265053 4811 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265060 4811 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265065 4811 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265071 4811 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.265078 4811 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.265087 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.265374 4811 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.283811 4811 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.283975 4811 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
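The `feature gates:` summary entries above print the effective gate map in Go's `map[Key:value ...]` syntax. As a minimal sketch (assuming only that exact `{map[...]}` layout, not any kubelet API), such a line can be parsed into a Python dict for comparison across restarts:

```python
import re

def parse_feature_gates(line: str) -> dict[str, bool]:
    """Parse a kubelet 'feature gates: {map[K:v ...]}' log entry into a dict."""
    m = re.search(r"map\[([^\]]*)\]", line)
    if not m:
        return {}
    gates = {}
    for pair in m.group(1).split():
        name, _, val = pair.partition(":")
        gates[name] = (val == "true")
    return gates

# Shortened sample in the same format as the journal entries above.
line = 'I0219 10:08:32.265087 4811 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}'
print(parse_feature_gates(line))
```

Diffing the dicts from two such lines is a quick way to confirm the three summaries in this log agree, as they do here.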
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.286241 4811 server.go:997] "Starting client certificate rotation"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.286281 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.287803 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-03 08:45:41.28268517 +0000 UTC
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.287994 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.333047 4811 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.335071 4811 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.338106 4811 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.373553 4811 log.go:25] "Validated CRI v1 runtime API"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.439480 4811 log.go:25] "Validated CRI v1 image API"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.442564 4811 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.448783 4811 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-10-04-14-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.448864 4811 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.471303 4811 manager.go:217] Machine: {Timestamp:2026-02-19 10:08:32.468565578 +0000 UTC m=+0.995030284 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0ae76a21-a2ee-4264-8d70-e6dc0f503661 BootID:aa043662-c579-48da-bd33-d995796f09b6 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f4:c0:bc Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f4:c0:bc Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:85:c5:32 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d1:04:23 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:55:dd:a2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:1c:c3:2d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3a:a7:fb:b4:47:b8 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:6a:db:d2:42:68:80 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.471596 4811 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.471802 4811 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.472212 4811 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.472398 4811 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.472433 4811 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.472685 4811 topology_manager.go:138] "Creating topology manager with none policy"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.472698 4811 container_manager_linux.go:303] "Creating device plugin manager"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.473090 4811 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.473125 4811 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.473406 4811 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.473525 4811 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.487475 4811 kubelet.go:418] "Attempting to sync node with API server"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.487508 4811 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.487590 4811 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.487612 4811 kubelet.go:324] "Adding apiserver pod source"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.487628 4811 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.493334 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.493339 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.493492 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.493529 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.493901 4811 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.494798 4811 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.495980 4811 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497597 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497644 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497658 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497672 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497693 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497707 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497720 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497741 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497758 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497785 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497810 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.497824 4811 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.499985 4811 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.500866 4811 server.go:1280] "Started kubelet" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.501231 4811 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.502583 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.502575 4811 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 19 10:08:32 crc systemd[1]: Started Kubernetes Kubelet. Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.503674 4811 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.504437 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.504505 4811 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.504705 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:55:40.417105677 +0000 UTC Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.504771 4811 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.511873 4811 factory.go:55] Registering systemd factory Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.511908 4811 factory.go:221] Registration of the systemd container factory successfully Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.512572 
4811 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.512620 4811 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.512803 4811 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.514295 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.514441 4811 factory.go:153] Registering CRI-O factory Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.514480 4811 factory.go:221] Registration of the crio container factory successfully Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.514565 4811 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.514591 4811 factory.go:103] Registering Raw factory Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.514612 4811 manager.go:1196] Started watching for new ooms in manager Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.514910 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.515050 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.527615 4811 manager.go:319] Starting recovery of all containers Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.530910 4811 server.go:460] "Adding debug handlers to kubelet server" Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.529847 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18959df6f8bd31fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 10:08:32.500822524 +0000 UTC m=+1.027287250,LastTimestamp:2026-02-19 10:08:32.500822524 +0000 UTC m=+1.027287250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531537 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531610 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 19 10:08:32 crc 
kubenswrapper[4811]: I0219 10:08:32.531624 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531638 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531650 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531664 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531677 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531689 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531702 4811 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531722 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531734 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531747 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531759 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531774 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531805 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531818 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531831 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531843 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531855 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531870 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531884 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531896 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531914 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531926 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531939 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531951 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531965 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.531979 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.532024 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.532037 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.532074 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.532087 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.532099 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.532111 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535276 4811 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535315 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535331 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535347 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535360 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535371 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535384 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535397 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535408 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535422 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535436 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535479 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535493 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535526 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535542 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535553 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535565 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 19 
10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535575 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535589 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535606 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535619 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535631 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535643 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535658 4811 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535669 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535682 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535693 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535705 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535716 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535728 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535740 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535753 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535766 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535778 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535792 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535804 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535817 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535829 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535841 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535852 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535865 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535877 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535891 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535904 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535916 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535939 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.535967 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536021 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536041 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536059 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536076 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536092 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536108 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536126 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536144 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536162 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536176 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536189 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536223 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536252 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536265 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536278 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536292 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536305 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536317 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536331 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536344 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536355 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536371 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536388 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536406 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536436 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536490 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536505 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536519 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536532 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536545 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536561 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536574 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536590 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536602 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536619 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536632 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536644 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536659 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536672 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536684 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536697 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536711 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536723 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536737 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536750 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536761 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536774 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536788 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536808 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536821 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536834 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536847 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536858 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536872 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536884 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536896 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536915 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536945 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536959 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536975 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.536991 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537022 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537048 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537061 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537074 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537086 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537098 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537111 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537124 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537136 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537164 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537178 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537216 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537230 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537244 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537256 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537268 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537279 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537307 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537319 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537332 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537348 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537359 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537371 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537383 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537427 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537514 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537527 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537601 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537615 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537629 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537645 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537662 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537677 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537695 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537709 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537742 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537755 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537767 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537815 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537827 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537842 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537854 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537882 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537894 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537906 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537917 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486"
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537929 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537940 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537953 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537966 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537977 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.537989 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538000 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538011 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538022 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538035 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538046 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538057 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538070 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538080 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538133 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538145 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538160 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538171 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538185 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538199 4811 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538210 4811 reconstruct.go:97] "Volume reconstruction finished" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.538220 4811 reconciler.go:26] "Reconciler: start to sync state" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.551908 4811 manager.go:324] Recovery completed Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.564772 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.566973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.567008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.567020 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.568370 4811 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.568395 4811 cpu_manager.go:226] 
"Reconciling" reconcilePeriod="10s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.568416 4811 state_mem.go:36] "Initialized new in-memory state store" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.579082 4811 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.581786 4811 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.581854 4811 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.581892 4811 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.581980 4811 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 10:08:32 crc kubenswrapper[4811]: W0219 10:08:32.584206 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.584326 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.599690 4811 policy_none.go:49] "None policy: Start" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.603387 4811 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.603426 4811 state_mem.go:35] 
"Initializing new in-memory state store" Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.605771 4811 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.649899 4811 manager.go:334] "Starting Device Plugin manager" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.650120 4811 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.650133 4811 server.go:79] "Starting device plugin registration server" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.650573 4811 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.650587 4811 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.650764 4811 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.650911 4811 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.650929 4811 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.657829 4811 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.683122 4811 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 10:08:32 crc kubenswrapper[4811]: 
I0219 10:08:32.683281 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.684705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.685044 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.685058 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.685225 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.685398 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.685441 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.686353 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.686382 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.686392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.686354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.686489 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.686503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.686546 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.686700 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.686738 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.687792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.687815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.687823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.687924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.687941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.687951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.688097 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.688219 4811 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.688259 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.688729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.688746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.688770 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.688860 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.688928 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.688960 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.689531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.689553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.689561 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.689661 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.689675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.689683 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.689815 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.689840 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.689821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.689925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.689937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.690484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.690502 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.690527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.715333 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.740941 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.740984 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741016 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741050 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741089 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741115 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 
10:08:32.741137 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741192 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741216 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741238 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741288 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741309 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741340 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741433 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.741529 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.751712 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.752857 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.752891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.752900 
4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.752943 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.753510 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.842398 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.842695 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.842745 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.842891 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 
10:08:32.843043 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843108 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843142 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843189 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843003 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843238 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") 
pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.842966 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843311 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843658 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843700 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843742 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843373 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843798 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843491 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843839 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843874 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843831 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843908 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843921 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.843945 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.844058 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.844122 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.844080 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.844130 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.844191 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.844237 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.954037 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.955613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.955656 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:32 crc 
kubenswrapper[4811]: I0219 10:08:32.955672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:32 crc kubenswrapper[4811]: I0219 10:08:32.955704 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 10:08:32 crc kubenswrapper[4811]: E0219 10:08:32.956336 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.008120 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.033412 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.045108 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.065512 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.071749 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 10:08:33 crc kubenswrapper[4811]: E0219 10:08:33.117161 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Feb 19 10:08:33 crc kubenswrapper[4811]: W0219 10:08:33.184846 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1e012a127f547eeab4343506da2f823aba95d472a8feee73f3954c9662008165 WatchSource:0}: Error finding container 1e012a127f547eeab4343506da2f823aba95d472a8feee73f3954c9662008165: Status 404 returned error can't find the container with id 1e012a127f547eeab4343506da2f823aba95d472a8feee73f3954c9662008165 Feb 19 10:08:33 crc kubenswrapper[4811]: W0219 10:08:33.187702 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9475ff64ad2850cfd91cd575e9f054d0c801731c85f1609403e7fb7cff24a667 WatchSource:0}: Error finding container 9475ff64ad2850cfd91cd575e9f054d0c801731c85f1609403e7fb7cff24a667: Status 404 returned error can't find the container with id 9475ff64ad2850cfd91cd575e9f054d0c801731c85f1609403e7fb7cff24a667 Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.356865 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.358217 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.358264 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 
19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.358276 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.358309 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 10:08:33 crc kubenswrapper[4811]: E0219 10:08:33.358685 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.504507 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.505477 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:23:50.296799686 +0000 UTC Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.587575 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9475ff64ad2850cfd91cd575e9f054d0c801731c85f1609403e7fb7cff24a667"} Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.589439 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b98be1039e819c0cf783793f7a62e00f33a03a1dc708f37245e2fc1f194989eb"} Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.590867 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1e012a127f547eeab4343506da2f823aba95d472a8feee73f3954c9662008165"} Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.591995 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"de01b82c71c41eb3c3d6c86fab06edac9a8dea430a41c6c774d2c4a34acbf3d8"} Feb 19 10:08:33 crc kubenswrapper[4811]: I0219 10:08:33.593174 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"680e2a48b6bb59e767bb5639e7681b53e8378f3f0082a136c5594e1cec3ef736"} Feb 19 10:08:33 crc kubenswrapper[4811]: W0219 10:08:33.795506 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:33 crc kubenswrapper[4811]: E0219 10:08:33.795632 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 19 10:08:33 crc kubenswrapper[4811]: W0219 10:08:33.807963 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:33 crc kubenswrapper[4811]: E0219 10:08:33.808091 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 19 10:08:33 crc kubenswrapper[4811]: W0219 10:08:33.901853 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:33 crc kubenswrapper[4811]: E0219 10:08:33.901986 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 19 10:08:33 crc kubenswrapper[4811]: E0219 10:08:33.918372 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Feb 19 10:08:34 crc kubenswrapper[4811]: W0219 10:08:34.091773 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:34 crc kubenswrapper[4811]: E0219 10:08:34.091867 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: 
connection refused" logger="UnhandledError" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.159137 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.160512 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.160560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.160572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.160607 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 10:08:34 crc kubenswrapper[4811]: E0219 10:08:34.161134 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.359423 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 10:08:34 crc kubenswrapper[4811]: E0219 10:08:34.361048 4811 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.504574 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial 
tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.506535 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:28:18.240276744 +0000 UTC Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.599601 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454" exitCode=0 Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.599757 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.599718 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454"} Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.601789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.601853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.601876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.604224 4811 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b" exitCode=0 Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.604324 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:34 crc kubenswrapper[4811]: 
I0219 10:08:34.604360 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b"} Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.604503 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.606590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.606660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.606687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.607034 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.607088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.607102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.610858 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ff5b9193bf2fbd5afbe0e118cfc3702516f2bd07f298ea8b41de72341672f131" exitCode=0 Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.611181 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.611998 4811 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ff5b9193bf2fbd5afbe0e118cfc3702516f2bd07f298ea8b41de72341672f131"} Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.619172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.619238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.619264 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.621007 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620"} Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.623408 4811 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c42e1697a703d581d273e5fba5d8108164f03f6393d943b98c6c1f95b834b962" exitCode=0 Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.623527 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c42e1697a703d581d273e5fba5d8108164f03f6393d943b98c6c1f95b834b962"} Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.623638 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.624762 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:34 crc kubenswrapper[4811]: 
I0219 10:08:34.624807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:34 crc kubenswrapper[4811]: I0219 10:08:34.624823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.503480 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.506658 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:53:20.963741072 +0000 UTC Feb 19 10:08:35 crc kubenswrapper[4811]: E0219 10:08:35.518923 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.633332 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="dbc068084503de52cc2bbf9b09ae198d7614e12337c4bb717e9db2cfdb9fa6ed" exitCode=0 Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.633402 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dbc068084503de52cc2bbf9b09ae198d7614e12337c4bb717e9db2cfdb9fa6ed"} Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.633551 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.634763 4811 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.634789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.634798 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.638044 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78"} Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.638072 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.638086 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88"} Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.638246 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937"} Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.638824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.638846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.638854 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.641023 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0d07f52d0313d478ada71ba0ef08d6fe85e6dc1c7636b40aec2ba113b388739f"} Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.641093 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.642745 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.642774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.642786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.645287 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0"} Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.645315 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6"} Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.645326 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53"} Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.647201 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111"} Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.647224 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070"} Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.647233 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9"} Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.647313 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.655184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.655268 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.655282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.762288 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:35 crc 
kubenswrapper[4811]: I0219 10:08:35.763942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.764007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.764026 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:35 crc kubenswrapper[4811]: I0219 10:08:35.764064 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 10:08:35 crc kubenswrapper[4811]: E0219 10:08:35.764743 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Feb 19 10:08:36 crc kubenswrapper[4811]: W0219 10:08:36.245279 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:36 crc kubenswrapper[4811]: E0219 10:08:36.245382 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.504057 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:36 crc kubenswrapper[4811]: W0219 10:08:36.504818 4811 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:36 crc kubenswrapper[4811]: E0219 10:08:36.504967 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.507348 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:27:54.135504691 +0000 UTC Feb 19 10:08:36 crc kubenswrapper[4811]: W0219 10:08:36.561684 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:36 crc kubenswrapper[4811]: E0219 10:08:36.561803 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.653244 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"097b65ba9cbb94e97a9104f4bbaff80f560a22e0673d99158b5f86073aed8f9d"} Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 
10:08:36.653312 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692"} Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.653323 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.654386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.654418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.654428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.655322 4811 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9f481ddc080d1ce7673f2ac51532c61c3643d7512b38ae579614fca73a582519" exitCode=0 Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.655348 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9f481ddc080d1ce7673f2ac51532c61c3643d7512b38ae579614fca73a582519"} Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.655511 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.655549 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.655562 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 
10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.655740 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.655874 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.656800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.656912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.656998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.656870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.657121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.657137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.656830 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.656812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.657225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.657238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.657201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:36 crc kubenswrapper[4811]: I0219 10:08:36.657280 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:36 crc kubenswrapper[4811]: W0219 10:08:36.931187 4811 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:36 crc kubenswrapper[4811]: E0219 10:08:36.931311 4811 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.236013 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.503888 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.508044 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:46:53.152579838 +0000 UTC Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.661079 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.663951 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="097b65ba9cbb94e97a9104f4bbaff80f560a22e0673d99158b5f86073aed8f9d" exitCode=255 Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.664041 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"097b65ba9cbb94e97a9104f4bbaff80f560a22e0673d99158b5f86073aed8f9d"} Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.664184 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.665810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.665870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.665882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.666552 4811 scope.go:117] "RemoveContainer" containerID="097b65ba9cbb94e97a9104f4bbaff80f560a22e0673d99158b5f86073aed8f9d" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.671832 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.675754 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3d5af90ca5dda7d7b59bf1808d4b09f46473cb8b78618e2edec69487a6bd21b4"} Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.675841 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7c47fb335db8da19853d68f48f38efe54c18cbac647d8b7b82e158c1a919a841"} Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.675864 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"11cc6a380f3286a7bdfaad33f20795014e59e2fc2fbabaa6687fdfaaca57ef5c"} Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.675880 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6e83eeac4a8caa30e0c0e866a4f10b66f890e718a31bc71dc389e9513c3bfb0b"} Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.681584 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.681744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.681775 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:37 crc kubenswrapper[4811]: I0219 10:08:37.783482 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.508436 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:11:19.279285109 +0000 UTC Feb 19 10:08:38 crc 
kubenswrapper[4811]: I0219 10:08:38.659187 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.679215 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.683253 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a"} Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.683513 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.683690 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.685436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.685494 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.685503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.689974 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6d7b047eebdb07af9e407ac9659310472d7ae70f9a7c85236e5e8b2f7181150d"} Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.690168 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:38 
crc kubenswrapper[4811]: I0219 10:08:38.691438 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.691630 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.691663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.844918 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.965901 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.967195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.967224 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.967232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:38 crc kubenswrapper[4811]: I0219 10:08:38.967254 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.509373 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:31:56.864408303 +0000 UTC Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.692983 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.693117 4811 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.694225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.694257 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.694270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.694921 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.694971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.694994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.819914 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.820131 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.821582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.821634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:39 crc kubenswrapper[4811]: I0219 10:08:39.821655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 10:08:40 crc kubenswrapper[4811]: I0219 10:08:40.474645 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:40 crc kubenswrapper[4811]: I0219 10:08:40.510130 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:43:05.224195307 +0000 UTC Feb 19 10:08:40 crc kubenswrapper[4811]: I0219 10:08:40.601789 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:40 crc kubenswrapper[4811]: I0219 10:08:40.697763 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:40 crc kubenswrapper[4811]: I0219 10:08:40.697936 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:40 crc kubenswrapper[4811]: I0219 10:08:40.700488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:40 crc kubenswrapper[4811]: I0219 10:08:40.700553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:40 crc kubenswrapper[4811]: I0219 10:08:40.700573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:40 crc kubenswrapper[4811]: I0219 10:08:40.700564 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:40 crc kubenswrapper[4811]: I0219 10:08:40.700663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:40 crc kubenswrapper[4811]: I0219 10:08:40.700710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 10:08:41 crc kubenswrapper[4811]: I0219 10:08:41.511111 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:58:36.562457911 +0000 UTC Feb 19 10:08:41 crc kubenswrapper[4811]: I0219 10:08:41.702013 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:41 crc kubenswrapper[4811]: I0219 10:08:41.703618 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:41 crc kubenswrapper[4811]: I0219 10:08:41.703692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:41 crc kubenswrapper[4811]: I0219 10:08:41.703716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:42 crc kubenswrapper[4811]: I0219 10:08:42.512201 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:05:18.381568901 +0000 UTC Feb 19 10:08:42 crc kubenswrapper[4811]: E0219 10:08:42.658027 4811 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 10:08:42 crc kubenswrapper[4811]: I0219 10:08:42.798544 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 19 10:08:42 crc kubenswrapper[4811]: I0219 10:08:42.798850 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:42 crc kubenswrapper[4811]: I0219 10:08:42.800327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:42 crc kubenswrapper[4811]: I0219 10:08:42.800364 4811 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:42 crc kubenswrapper[4811]: I0219 10:08:42.800374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:43 crc kubenswrapper[4811]: I0219 10:08:43.512530 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:29:21.325030866 +0000 UTC Feb 19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.437568 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.437842 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.439809 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.439860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.439876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.513549 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:03:05.33229109 +0000 UTC Feb 19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.980124 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.980398 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 
19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.982178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.982228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.982238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:44 crc kubenswrapper[4811]: I0219 10:08:44.984779 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:45 crc kubenswrapper[4811]: I0219 10:08:45.514567 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 02:57:04.921152396 +0000 UTC Feb 19 10:08:45 crc kubenswrapper[4811]: I0219 10:08:45.713998 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 10:08:45 crc kubenswrapper[4811]: I0219 10:08:45.715263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:45 crc kubenswrapper[4811]: I0219 10:08:45.715330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:45 crc kubenswrapper[4811]: I0219 10:08:45.715398 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:45 crc kubenswrapper[4811]: I0219 10:08:45.720525 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:46 crc kubenswrapper[4811]: I0219 10:08:46.515585 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration 
is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:33:21.600066447 +0000 UTC
Feb 19 10:08:46 crc kubenswrapper[4811]: I0219 10:08:46.716929 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 10:08:46 crc kubenswrapper[4811]: I0219 10:08:46.718162 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:08:46 crc kubenswrapper[4811]: I0219 10:08:46.718212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:08:46 crc kubenswrapper[4811]: I0219 10:08:46.718223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.026422 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.026626 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.438321 4811 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.438441 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.516365 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 11:36:07.437616759 +0000 UTC
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.594287 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.594577 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.595872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.595922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.595939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.647018 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.719994 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.721760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.721832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.721853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:08:47 crc kubenswrapper[4811]: I0219 10:08:47.740983 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 19 10:08:48 crc kubenswrapper[4811]: I0219 10:08:48.503563 4811 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 19 10:08:48 crc kubenswrapper[4811]: I0219 10:08:48.516720 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:07:14.458279369 +0000 UTC
Feb 19 10:08:48 crc kubenswrapper[4811]: E0219 10:08:48.661100 4811 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 19 10:08:48 crc kubenswrapper[4811]: E0219 10:08:48.720288 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Feb 19 10:08:48 crc kubenswrapper[4811]: I0219 10:08:48.819806 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 10:08:48 crc kubenswrapper[4811]: I0219 10:08:48.821037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:08:48 crc kubenswrapper[4811]: I0219 10:08:48.821105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:08:48 crc kubenswrapper[4811]: I0219 10:08:48.821118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:08:48 crc kubenswrapper[4811]: E0219 10:08:48.969030 4811 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Feb 19 10:08:49 crc kubenswrapper[4811]: I0219 10:08:49.517753 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:38:32.001186656 +0000 UTC
Feb 19 10:08:49 crc kubenswrapper[4811]: E0219 10:08:49.542556 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.18959df6f8bd31fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 10:08:32.500822524 +0000 UTC m=+1.027287250,LastTimestamp:2026-02-19 10:08:32.500822524 +0000 UTC m=+1.027287250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 10:08:49 crc kubenswrapper[4811]: I0219 10:08:49.804426 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc
container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 19 10:08:49 crc kubenswrapper[4811]: I0219 10:08:49.805373 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 19 10:08:49 crc kubenswrapper[4811]: I0219 10:08:49.819219 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 19 10:08:49 crc kubenswrapper[4811]: I0219 10:08:49.819313 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 19 10:08:50 crc kubenswrapper[4811]: I0219 10:08:50.482729 4811 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]log ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]etcd ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/priority-and-fairness-filter ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/start-apiextensions-informers ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/start-apiextensions-controllers ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/crd-informer-synced ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/start-system-namespaces-controller ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Feb 19 10:08:50 crc kubenswrapper[4811]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/priority-and-fairness-config-producer ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/bootstrap-controller ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/apiservice-registration-controller ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/apiservice-discovery-controller ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]autoregister-completion ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/apiservice-openapi-controller ok
Feb 19 10:08:50 crc kubenswrapper[4811]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 19 10:08:50 crc kubenswrapper[4811]: livez check failed
Feb 19 10:08:50 crc kubenswrapper[4811]: I0219 10:08:50.482822 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 10:08:50 crc kubenswrapper[4811]: I0219 10:08:50.518256 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:50:05.845291921 +0000 UTC
Feb 19 10:08:51 crc kubenswrapper[4811]: I0219 10:08:51.518608 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is
2026-01-16 05:21:57.767540892 +0000 UTC
Feb 19 10:08:52 crc kubenswrapper[4811]: I0219 10:08:52.519649 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:28:04.839942778 +0000 UTC
Feb 19 10:08:52 crc kubenswrapper[4811]: E0219 10:08:52.658193 4811 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 10:08:53 crc kubenswrapper[4811]: I0219 10:08:53.520428 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:32:08.757908126 +0000 UTC
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.520572 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:53:41.16148566 +0000 UTC
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.820219 4811 trace.go:236] Trace[482277622]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 10:08:39.811) (total time: 15008ms):
Feb 19 10:08:54 crc kubenswrapper[4811]: Trace[482277622]: ---"Objects listed" error: 15008ms (10:08:54.820)
Feb 19 10:08:54 crc kubenswrapper[4811]: Trace[482277622]: [15.008596475s] [15.008596475s] END
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.820259 4811 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.870612 4811 trace.go:236] Trace[984977797]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 10:08:39.850) (total time: 15020ms):
Feb 19 10:08:54 crc kubenswrapper[4811]: Trace[984977797]: ---"Objects listed" error: 15020ms (10:08:54.870)
Feb 19 10:08:54 crc kubenswrapper[4811]: Trace[984977797]: [15.020063382s] [15.020063382s] END
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.870650 4811 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.873117 4811 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.873780 4811 trace.go:236] Trace[564144898]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 10:08:41.588) (total time: 13285ms):
Feb 19 10:08:54 crc kubenswrapper[4811]: Trace[564144898]: ---"Objects listed" error: 13284ms (10:08:54.873)
Feb 19 10:08:54 crc kubenswrapper[4811]: Trace[564144898]: [13.28517018s] [13.28517018s] END
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.873825 4811 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.876003 4811 trace.go:236] Trace[394343976]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 10:08:41.031) (total time: 13844ms):
Feb 19 10:08:54 crc kubenswrapper[4811]: Trace[394343976]: ---"Objects listed" error: 13843ms (10:08:54.875)
Feb 19 10:08:54 crc kubenswrapper[4811]: Trace[394343976]: [13.844026289s] [13.844026289s] END
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.876031 4811 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.903081 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.977929 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 10:08:54 crc kubenswrapper[4811]: I0219 10:08:54.982230 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.369151 4811 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.370916 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.370954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.370967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.371102 4811 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.391631 4811 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.392125 4811 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.393480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.393511 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.393524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.393544 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.393558 4811 setters.go:603] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:55Z","lastTransitionTime":"2026-02-19T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:55 crc kubenswrapper[4811]: E0219 10:08:55.417269 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.420932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.420961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.420992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.421006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.421015 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:55Z","lastTransitionTime":"2026-02-19T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:55 crc kubenswrapper[4811]: E0219 10:08:55.437326 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.441697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.441751 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.441767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.441789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.441804 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:55Z","lastTransitionTime":"2026-02-19T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:55 crc kubenswrapper[4811]: E0219 10:08:55.453620 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status … for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.456753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.456788 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.456799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.456814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.456825 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:55Z","lastTransitionTime":"2026-02-19T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:55 crc kubenswrapper[4811]: E0219 10:08:55.466570 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status … for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.469885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.469937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.469950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.469970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.469983 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:55Z","lastTransitionTime":"2026-02-19T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.478026 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:55 crc kubenswrapper[4811]: E0219 10:08:55.479327 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc
0f503661\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:55 crc kubenswrapper[4811]: E0219 10:08:55.479534 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.481982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.482026 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.482040 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.482066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.482081 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:55Z","lastTransitionTime":"2026-02-19T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.502620 4811 apiserver.go:52] "Watching apiserver" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.521734 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:46:56.351806243 +0000 UTC Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.584760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.584802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.584812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.584828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.584837 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:55Z","lastTransitionTime":"2026-02-19T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.661341 4811 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.661686 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.662076 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.662139 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.662190 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:08:55 crc kubenswrapper[4811]: E0219 10:08:55.662295 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:08:55 crc kubenswrapper[4811]: E0219 10:08:55.662357 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.662497 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.662518 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.662539 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:08:55 crc kubenswrapper[4811]: E0219 10:08:55.662595 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.665730 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.666053 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.666210 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.666305 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.666315 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.923608 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.933722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.933771 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.933785 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.933802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.933815 4811 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:55Z","lastTransitionTime":"2026-02-19T10:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.934201 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.934235 4811 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.935279 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.937042 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 10:08:55 crc kubenswrapper[4811]: E0219 10:08:55.942125 4811 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.948994 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:55 crc kubenswrapper[4811]: E0219 10:08:55.959499 4811 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:55 crc kubenswrapper[4811]: I0219 10:08:55.960121 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.041270 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.041270 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.041325 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.041601 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.041900 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.042608 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.042775 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.042982 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043160 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043272 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043352 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") 
" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043408 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043306 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043517 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043579 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043610 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043665 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043693 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043714 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043744 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043799 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043837 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043889 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043918 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044050 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044079 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044131 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044157 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 
10:08:56.044211 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044238 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044298 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044355 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044395 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044473 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") 
pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044504 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044561 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044592 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044643 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044676 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044729 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044755 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044803 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044834 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044911 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044921 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: 
"6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044963 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.044996 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045046 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045072 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045096 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045145 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045172 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045222 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045248 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045297 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045323 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") 
pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045347 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045404 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045471 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045501 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045560 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 
10:08:56.045593 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045625 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045651 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045675 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045701 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045725 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045753 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045780 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045807 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045834 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045858 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045883 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045908 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045937 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045962 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.045990 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046017 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046105 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046131 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046183 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046210 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046287 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046312 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046335 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046363 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046387 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046412 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046435 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 10:08:56 crc 
kubenswrapper[4811]: I0219 10:08:56.046489 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046516 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046630 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046656 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046682 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046706 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046730 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046759 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046789 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046834 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046867 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 
10:08:56.046902 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046927 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046952 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.046975 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047001 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047026 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047049 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047071 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047095 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047120 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047144 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047168 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047191 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047214 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047243 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047267 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047291 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047317 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047342 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047365 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047388 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047414 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 10:08:56 crc 
kubenswrapper[4811]: I0219 10:08:56.047440 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047487 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047510 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047533 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047557 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047584 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047609 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047634 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047660 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047688 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047749 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 10:08:56 
crc kubenswrapper[4811]: I0219 10:08:56.047786 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047813 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047839 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047867 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047891 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047916 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047942 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047968 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.047995 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048027 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048052 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 10:08:56 crc 
kubenswrapper[4811]: I0219 10:08:56.048078 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048104 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048129 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048187 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048227 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048297 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048327 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048353 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048380 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048405 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048429 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 
10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048518 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048572 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048605 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048692 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048750 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048779 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048829 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.048951 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049053 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049086 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049144 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 10:08:56 crc 
kubenswrapper[4811]: I0219 10:08:56.049185 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049247 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049275 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049328 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049352 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049402 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049431 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049491 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049520 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049572 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049599 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049649 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049675 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049726 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049763 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049836 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049893 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.049923 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050009 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050090 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050130 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050206 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 10:08:56 crc 
kubenswrapper[4811]: I0219 10:08:56.050286 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050359 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050433 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050511 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050590 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050660 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050702 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050776 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050839 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050903 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.050934 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051027 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051094 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051162 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051196 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051251 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051282 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051339 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051392 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051425 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051483 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051512 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051604 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051670 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051738 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051850 4811 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051871 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051943 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051967 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051967 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051987 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.051954 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.052363 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.052387 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.052443 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.052654 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.052708 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.052872 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.052884 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.052984 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053009 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053082 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053136 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053183 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053372 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053390 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053510 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053613 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053642 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053680 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.043748 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053740 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053805 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053833 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:56Z","lastTransitionTime":"2026-02-19T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053982 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.054081 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.053905 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.054237 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.054417 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.055358 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.055433 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:08:56.55540338 +0000 UTC m=+25.081868066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.055864 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.057093 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc4
78274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097b65ba9cbb94e97a9104f4bbaff80f560a22e0673d99158b5f86073aed8f9d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:37Z\\\",\\\"message\\\":\\\"W0219 10:08:36.479613 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 10:08:36.480107 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771495716 cert, and key in /tmp/serving-cert-2100221586/serving-signer.crt, /tmp/serving-cert-2100221586/serving-signer.key\\\\nI0219 10:08:37.047998 1 observer_polling.go:159] Starting file observer\\\\nW0219 10:08:37.051104 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 10:08:37.051351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 10:08:37.052877 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2100221586/tls.crt::/tmp/serving-cert-2100221586/tls.key\\\\\\\"\\\\nF0219 10:08:37.563687 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.057606 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.058109 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.058183 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.058294 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.058764 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.058821 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.058830 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.058854 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.058868 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.058945 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:08:56.558918442 +0000 UTC m=+25.085383118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.059317 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.059423 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.059839 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.060193 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.060557 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.060572 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.060796 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.061185 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.060311 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.059167 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.065117 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.066845 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.066854 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.068104 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.068409 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.070582 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.070725 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.070785 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.070872 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.070945 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:08:56.570926294 +0000 UTC m=+25.097390980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.071485 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.071998 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.072401 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.073620 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.074219 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.074426 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.074814 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.074878 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.075249 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.075508 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.076374 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.076767 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.077160 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.077574 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.078163 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.078564 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.083093 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.083669 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.084855 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.085284 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.085296 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.085725 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.086007 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.094107 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.094181 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.069538 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.094222 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.094968 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.095126 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.095849 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.096048 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.096298 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.096522 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.096803 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.097139 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.097597 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.097779 4811 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.105957 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.106530 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.106837 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.107152 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.107396 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.107696 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.107903 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.108356 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.108749 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.109114 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.109201 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.109513 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.109842 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.111206 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.111625 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.112121 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.112188 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.112788 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.112997 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.113012 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.113066 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.113363 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.113707 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.113797 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 10:08:56.613776359 +0000 UTC m=+25.140241225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.114338 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.114798 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.115123 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.115591 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.116070 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.116374 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.116607 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.116856 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.117041 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.117317 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.117319 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.117694 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.117780 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.118231 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.118034 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.118654 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.118756 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.119424 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.119674 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.119749 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.119792 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.119912 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.120180 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.120206 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.120230 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.120327 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.120335 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.120650 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.120856 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.120870 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.121109 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.121118 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.121352 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.121795 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.122031 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.122334 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.122397 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.122567 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.122600 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.122597 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.122907 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.123298 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.123336 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.123420 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.123572 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.123821 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.128904 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.129044 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.129291 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.129349 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.129624 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.129644 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.130672 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.131381 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.131574 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.131679 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.131973 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.132190 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.132226 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.132655 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.132697 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.132720 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.132812 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 10:08:56.632764404 +0000 UTC m=+25.159229280 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.132797 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.132654 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.132870 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.133461 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.136069 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.136730 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.136773 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.137871 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.138070 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.138181 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.138247 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.138333 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.138419 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.138604 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.138656 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.141876 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.142216 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.142622 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.145330 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152073 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152291 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152471 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152582 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152607 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152667 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152678 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152687 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node 
\"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: W0219 10:08:56.152686 4811 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152707 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152698 4811 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152765 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152784 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152802 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152838 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152873 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152917 4811 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152922 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.152936 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.156166 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157053 4811 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157087 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157099 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157110 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157119 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157129 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157138 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157148 4811 
reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157157 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157166 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157176 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157185 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157194 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157205 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157217 4811 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157226 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157235 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157245 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157254 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157265 4811 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157275 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157283 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc 
kubenswrapper[4811]: I0219 10:08:56.157292 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157301 4811 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157310 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157318 4811 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157328 4811 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157338 4811 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157350 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157361 4811 reconciler_common.go:293] "Volume detached 
for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157374 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157382 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157390 4811 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157399 4811 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157409 4811 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157419 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157430 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157440 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157468 4811 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157477 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157486 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157495 4811 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157504 4811 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157512 4811 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157523 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157532 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157540 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157552 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157563 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157570 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157579 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157588 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157597 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157605 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157615 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157624 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157633 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157642 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157651 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157660 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157669 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157678 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157686 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157694 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157702 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157710 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157719 4811 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157727 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157738 4811 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157746 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157754 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157763 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157771 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157780 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157788 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157799 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157807 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157816 4811 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157827 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157837 
4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157846 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157854 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157862 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157875 4811 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157883 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157892 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157901 
4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157910 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157919 4811 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157928 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157939 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157949 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157958 4811 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157923 4811 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.157967 4811 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158082 4811 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158090 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158098 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158226 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158240 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158254 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158267 4811 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158278 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158288 4811 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc 
kubenswrapper[4811]: I0219 10:08:56.158299 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158310 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158328 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158342 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158353 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158363 4811 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158373 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158397 4811 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158409 4811 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158563 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158573 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158584 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158617 4811 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158628 4811 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158639 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158650 4811 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158680 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158692 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158703 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158713 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158725 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158735 4811 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 
10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158745 4811 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158755 4811 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158767 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158778 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158791 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158803 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158814 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158824 4811 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158835 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158845 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158856 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158866 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158877 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158892 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158910 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158924 4811 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158936 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158948 4811 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158961 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158972 4811 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158986 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.158997 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc 
kubenswrapper[4811]: I0219 10:08:56.159009 4811 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159022 4811 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159033 4811 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159044 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159057 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159067 4811 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159077 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159087 4811 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159103 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159125 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159141 4811 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159157 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159169 4811 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159179 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159053 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: 
"config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159191 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159416 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159440 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159479 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159495 4811 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159510 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159526 4811 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159543 4811 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159562 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159577 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159591 4811 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159606 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159621 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.159635 4811 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.161889 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.164293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.164342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.164362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.164388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.164404 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:56Z","lastTransitionTime":"2026-02-19T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.165598 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.165722 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.169522 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.185301 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.187481 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.215807 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.220834 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bpzh7"] Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.221283 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bpzh7" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.224177 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-v97zm"] Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.227605 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.249808 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.250192 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.250422 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-9s768"] Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.251025 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.251211 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.251369 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lxbrg"] Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.251407 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.251533 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ln785"] Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.251651 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.251668 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.253015 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.253693 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263018 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2e42c80-bece-4066-abad-05bc661d33f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263089 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-cnibin\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263130 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj9g6\" (UniqueName: \"kubernetes.io/projected/221c25e2-b834-4fe6-83e4-714328c60b4e-kube-api-access-sj9g6\") pod \"node-resolver-bpzh7\" (UID: \"221c25e2-b834-4fe6-83e4-714328c60b4e\") " pod="openshift-dns/node-resolver-bpzh7" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263176 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-os-release\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263207 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx9bn\" (UniqueName: \"kubernetes.io/projected/98110a48-08c0-4227-93dc-cfadaa45b044-kube-api-access-zx9bn\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263241 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e2e42c80-bece-4066-abad-05bc661d33f5-rootfs\") pod \"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263272 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/98110a48-08c0-4227-93dc-cfadaa45b044-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263327 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kvl6\" (UniqueName: \"kubernetes.io/projected/e2e42c80-bece-4066-abad-05bc661d33f5-kube-api-access-9kvl6\") pod \"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc 
kubenswrapper[4811]: I0219 10:08:56.263425 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2e42c80-bece-4066-abad-05bc661d33f5-proxy-tls\") pod \"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263490 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/98110a48-08c0-4227-93dc-cfadaa45b044-cni-binary-copy\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263529 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/221c25e2-b834-4fe6-83e4-714328c60b4e-hosts-file\") pod \"node-resolver-bpzh7\" (UID: \"221c25e2-b834-4fe6-83e4-714328c60b4e\") " pod="openshift-dns/node-resolver-bpzh7" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263553 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-system-cni-dir\") pod \"multus-additional-cni-plugins-9s768\" (UID: 
\"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263671 4811 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263690 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263705 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263721 4811 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263734 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263747 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.263761 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: 
I0219 10:08:56.263777 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.264146 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.264592 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.264846 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.264998 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.265037 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.265067 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.265024 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.265314 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.265521 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.265552 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.265707 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.265896 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.269822 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.270749 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.270787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.270803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.270824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.270838 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:56Z","lastTransitionTime":"2026-02-19T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.271244 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.272249 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.275586 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.275672 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.275707 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.275595 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.275943 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.286636 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.301600 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.326168 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097b65ba9cbb94e97a9104f4bbaff80f560a22e0673d99158b5f86073aed8f9d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:37Z\\\"
,\\\"message\\\":\\\"W0219 10:08:36.479613 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 10:08:36.480107 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771495716 cert, and key in /tmp/serving-cert-2100221586/serving-signer.crt, /tmp/serving-cert-2100221586/serving-signer.key\\\\nI0219 10:08:37.047998 1 observer_polling.go:159] Starting file observer\\\\nW0219 10:08:37.051104 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 10:08:37.051351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 10:08:37.052877 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2100221586/tls.crt::/tmp/serving-cert-2100221586/tls.key\\\\\\\"\\\\nF0219 10:08:37.563687 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365465 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-cni-dir\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365538 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-os-release\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365581 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-etc-kubernetes\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365615 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e2e42c80-bece-4066-abad-05bc661d33f5-rootfs\") pod 
\"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365645 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/98110a48-08c0-4227-93dc-cfadaa45b044-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365667 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/554fedad-397a-4827-b5ad-48a681a7a2e5-ovn-node-metrics-cert\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365700 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-var-lib-cni-bin\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365722 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-log-socket\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365744 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-var-lib-kubelet\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365763 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trc25\" (UniqueName: \"kubernetes.io/projected/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-kube-api-access-trc25\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365784 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365803 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-run-multus-certs\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365828 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-slash\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365856 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-openvswitch\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365874 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-bin\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365899 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-system-cni-dir\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365920 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-system-cni-dir\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365939 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-config\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365962 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-cnibin\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365978 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-var-lib-cni-multus\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.365998 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-var-lib-openvswitch\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366024 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366043 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj9g6\" (UniqueName: \"kubernetes.io/projected/221c25e2-b834-4fe6-83e4-714328c60b4e-kube-api-access-sj9g6\") pod \"node-resolver-bpzh7\" (UID: \"221c25e2-b834-4fe6-83e4-714328c60b4e\") " pod="openshift-dns/node-resolver-bpzh7" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366059 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-socket-dir-parent\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366079 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-netd\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366100 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-os-release\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366120 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx9bn\" (UniqueName: \"kubernetes.io/projected/98110a48-08c0-4227-93dc-cfadaa45b044-kube-api-access-zx9bn\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366140 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-netns\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 
10:08:56.366161 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-etc-openvswitch\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366178 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-cnibin\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366193 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-conf-dir\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366207 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-kubelet\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366223 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-systemd-units\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366238 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkbcd\" (UniqueName: \"kubernetes.io/projected/554fedad-397a-4827-b5ad-48a681a7a2e5-kube-api-access-mkbcd\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366264 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kvl6\" (UniqueName: \"kubernetes.io/projected/e2e42c80-bece-4066-abad-05bc661d33f5-kube-api-access-9kvl6\") pod \"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366280 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-run-netns\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366296 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-node-log\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366316 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2e42c80-bece-4066-abad-05bc661d33f5-proxy-tls\") pod \"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 
10:08:56.366333 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/98110a48-08c0-4227-93dc-cfadaa45b044-cni-binary-copy\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366348 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-cni-binary-copy\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366367 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-systemd\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366385 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/221c25e2-b834-4fe6-83e4-714328c60b4e-hosts-file\") pod \"node-resolver-bpzh7\" (UID: \"221c25e2-b834-4fe6-83e4-714328c60b4e\") " pod="openshift-dns/node-resolver-bpzh7" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366404 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-daemon-config\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366419 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366435 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-run-k8s-cni-cncf-io\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366472 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-ovn\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366488 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-script-lib\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366509 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2e42c80-bece-4066-abad-05bc661d33f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 
crc kubenswrapper[4811]: I0219 10:08:56.366534 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-hostroot\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366550 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-env-overrides\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.366666 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e2e42c80-bece-4066-abad-05bc661d33f5-rootfs\") pod \"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.367401 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/98110a48-08c0-4227-93dc-cfadaa45b044-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.368966 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-system-cni-dir\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: 
I0219 10:08:56.369094 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-cnibin\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.369114 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.369809 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/98110a48-08c0-4227-93dc-cfadaa45b044-os-release\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.369996 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/221c25e2-b834-4fe6-83e4-714328c60b4e-hosts-file\") pod \"node-resolver-bpzh7\" (UID: \"221c25e2-b834-4fe6-83e4-714328c60b4e\") " pod="openshift-dns/node-resolver-bpzh7" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.369998 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.373186 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/98110a48-08c0-4227-93dc-cfadaa45b044-cni-binary-copy\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.373566 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2e42c80-bece-4066-abad-05bc661d33f5-mcd-auth-proxy-config\") pod \"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.381193 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/e2e42c80-bece-4066-abad-05bc661d33f5-proxy-tls\") pod \"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.396088 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kvl6\" (UniqueName: \"kubernetes.io/projected/e2e42c80-bece-4066-abad-05bc661d33f5-kube-api-access-9kvl6\") pod \"machine-config-daemon-v97zm\" (UID: \"e2e42c80-bece-4066-abad-05bc661d33f5\") " pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.397939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.398066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.398196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.398270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.398330 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:56Z","lastTransitionTime":"2026-02-19T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.409307 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx9bn\" (UniqueName: \"kubernetes.io/projected/98110a48-08c0-4227-93dc-cfadaa45b044-kube-api-access-zx9bn\") pod \"multus-additional-cni-plugins-9s768\" (UID: \"98110a48-08c0-4227-93dc-cfadaa45b044\") " pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.420234 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.421418 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj9g6\" (UniqueName: \"kubernetes.io/projected/221c25e2-b834-4fe6-83e4-714328c60b4e-kube-api-access-sj9g6\") pod \"node-resolver-bpzh7\" (UID: \"221c25e2-b834-4fe6-83e4-714328c60b4e\") " pod="openshift-dns/node-resolver-bpzh7" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.459804 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469021 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-run-netns\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469084 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-node-log\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469130 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-cni-binary-copy\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469151 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-systemd\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469170 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-daemon-config\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469187 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469206 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-run-k8s-cni-cncf-io\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469222 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-ovn\") pod \"ovnkube-node-ln785\" 
(UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469239 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-script-lib\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469261 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-hostroot\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469288 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-env-overrides\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469317 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-cni-dir\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469336 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-os-release\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 
10:08:56.469358 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-etc-kubernetes\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469380 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/554fedad-397a-4827-b5ad-48a681a7a2e5-ovn-node-metrics-cert\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469404 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-var-lib-cni-bin\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469422 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-log-socket\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469464 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trc25\" (UniqueName: \"kubernetes.io/projected/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-kube-api-access-trc25\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469490 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-var-lib-kubelet\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469511 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-run-multus-certs\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469528 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-slash\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469548 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-openvswitch\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469566 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-bin\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469585 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-system-cni-dir\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469601 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-config\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469628 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-var-lib-cni-multus\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469645 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-var-lib-openvswitch\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469666 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469695 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-socket-dir-parent\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469722 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-netd\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469741 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-netns\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469758 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-etc-openvswitch\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469776 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-conf-dir\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469792 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-kubelet\") pod \"ovnkube-node-ln785\" 
(UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469810 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-cnibin\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469829 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-systemd-units\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.469847 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkbcd\" (UniqueName: \"kubernetes.io/projected/554fedad-397a-4827-b5ad-48a681a7a2e5-kube-api-access-mkbcd\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.470317 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-run-netns\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.470358 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-node-log\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 
10:08:56.471086 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-cni-binary-copy\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.471148 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-systemd\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.471620 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-daemon-config\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.471666 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.471693 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-run-k8s-cni-cncf-io\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.471724 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-ovn\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.472314 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-script-lib\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.472363 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-hostroot\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.472804 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-env-overrides\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.473193 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-cni-dir\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.473658 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-os-release\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " 
pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.473842 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-var-lib-cni-multus\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.473965 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-var-lib-openvswitch\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.474080 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.474214 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-socket-dir-parent\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.474296 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-config\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc 
kubenswrapper[4811]: I0219 10:08:56.474363 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-etc-kubernetes\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.474967 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-netd\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.475095 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-netns\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.475211 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-etc-openvswitch\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.475347 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-multus-conf-dir\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.475483 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-kubelet\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.475625 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-cnibin\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.475736 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-systemd-units\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.475854 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-run-multus-certs\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.475895 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-openvswitch\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.476031 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-log-socket\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.475956 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-bin\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.476004 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-system-cni-dir\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.476105 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-var-lib-cni-bin\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.475926 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-slash\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.476325 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-host-var-lib-kubelet\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.481392 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/554fedad-397a-4827-b5ad-48a681a7a2e5-ovn-node-metrics-cert\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.492798 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.518416 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trc25\" (UniqueName: \"kubernetes.io/projected/9766aa7e-a66b-4efc-a5de-2fe49ef00eb8-kube-api-access-trc25\") pod \"multus-lxbrg\" (UID: \"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\") " pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.521985 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:23:50.990494595 +0000 UTC Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.522071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.522121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.522134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:56 crc 
kubenswrapper[4811]: I0219 10:08:56.522156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.522171 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:56Z","lastTransitionTime":"2026-02-19T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.525312 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkbcd\" (UniqueName: \"kubernetes.io/projected/554fedad-397a-4827-b5ad-48a681a7a2e5-kube-api-access-mkbcd\") pod \"ovnkube-node-ln785\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.542159 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.558267 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.571055 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.571176 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.571202 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.571317 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.571371 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:08:57.571354379 +0000 UTC m=+26.097819065 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.571424 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:08:57.571418431 +0000 UTC m=+26.097883117 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.571490 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.571511 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:08:57.571506053 +0000 UTC m=+26.097970739 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.579733 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bpzh7" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.593381 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.594249 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.594894 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.597193 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.598140 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.599313 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.599910 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.601114 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.601696 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lxbrg" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.602322 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.603682 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.605121 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.606288 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.609150 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.611083 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.613362 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.613728 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9s768" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.614597 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.615189 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.616782 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.617387 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.617775 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.618056 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.619260 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.620529 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.621198 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.622321 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.623062 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.623066 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.624198 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.624228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.624237 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.624254 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.624264 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:56Z","lastTransitionTime":"2026-02-19T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.625766 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.626423 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.628097 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.629134 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.631977 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.634476 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.635155 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.635802 4811 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.636340 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.638978 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.639538 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.641128 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.643072 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.643971 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.645427 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.646292 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.648185 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.648738 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.649792 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.652570 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.653317 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.654710 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: W0219 10:08:56.654926 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98110a48_08c0_4227_93dc_cfadaa45b044.slice/crio-db14fe10fb2c8b47dafa473084bf39165ee579459d1e75ecf53d8c6033943418 WatchSource:0}: Error finding container db14fe10fb2c8b47dafa473084bf39165ee579459d1e75ecf53d8c6033943418: Status 404 returned error can't find the container with id db14fe10fb2c8b47dafa473084bf39165ee579459d1e75ecf53d8c6033943418 Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.655196 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.655471 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.656593 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.657416 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.658970 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.659570 4811 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.660603 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.661776 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.662503 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.663625 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 10:08:56 crc kubenswrapper[4811]: W0219 10:08:56.664040 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod554fedad_397a_4827_b5ad_48a681a7a2e5.slice/crio-f27ea24930a1682da92e0fc4a861414130084d8eaa1a23cf28f9fb0da0af8e06 WatchSource:0}: Error finding container f27ea24930a1682da92e0fc4a861414130084d8eaa1a23cf28f9fb0da0af8e06: Status 404 returned error can't find the container with id f27ea24930a1682da92e0fc4a861414130084d8eaa1a23cf28f9fb0da0af8e06 Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.669705 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.671947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.671992 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.672166 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.672194 4811 projected.go:288] Couldn't 
get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.672209 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.672268 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 10:08:57.672248915 +0000 UTC m=+26.198713591 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.672329 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.672341 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.672349 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.672370 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 10:08:57.672363588 +0000 UTC m=+26.198828274 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.681034 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097b65ba9cbb94e97a9104f4bbaff80f560a22e0673d99158b5f86073aed8f9d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:37Z\\\"
,\\\"message\\\":\\\"W0219 10:08:36.479613 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 10:08:36.480107 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771495716 cert, and key in /tmp/serving-cert-2100221586/serving-signer.crt, /tmp/serving-cert-2100221586/serving-signer.key\\\\nI0219 10:08:37.047998 1 observer_polling.go:159] Starting file observer\\\\nW0219 10:08:37.051104 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 10:08:37.051351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 10:08:37.052877 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2100221586/tls.crt::/tmp/serving-cert-2100221586/tls.key\\\\\\\"\\\\nF0219 10:08:37.563687 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.690424 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.702944 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.714170 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.725285 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.729369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.729395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.729404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.729432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.729442 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:56Z","lastTransitionTime":"2026-02-19T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.736525 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.747145 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.757365 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.771662 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.846914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.846948 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.846956 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.846973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.846982 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:56Z","lastTransitionTime":"2026-02-19T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.937086 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.937166 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"cc1a159328c3e1f72e38c5198783079460333b61d65e64d0e89b32675e0511dc"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.939314 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019" exitCode=0 Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.939405 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.939466 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"f27ea24930a1682da92e0fc4a861414130084d8eaa1a23cf28f9fb0da0af8e06"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.941263 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bpzh7" event={"ID":"221c25e2-b834-4fe6-83e4-714328c60b4e","Type":"ContainerStarted","Data":"f6aa068e9cf02926b188c07d0e0f140b7530951f81522c482c8a1b90a712ae0c"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.949217 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.949271 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.949266 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.949323 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.949292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.949341 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e64b91b7354811b9c9768bd9e6e7801116322284f710bbc28dfc7811a2422847"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.949361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.949378 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:56Z","lastTransitionTime":"2026-02-19T10:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.952521 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxbrg" event={"ID":"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8","Type":"ContainerStarted","Data":"03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.952595 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxbrg" event={"ID":"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8","Type":"ContainerStarted","Data":"13d4585364c9166c20ea007def6dff533ad67d9f3d92b8c39624fad4cf5cd20e"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.953704 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.953881 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6545039ad25f6743350fa58290772627ed332449545234466ea67e9256d4d6e1"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.957196 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.957948 4811 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.960310 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a" exitCode=255 Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.960400 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.960476 4811 scope.go:117] "RemoveContainer" containerID="097b65ba9cbb94e97a9104f4bbaff80f560a22e0673d99158b5f86073aed8f9d" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.960848 4811 scope.go:117] "RemoveContainer" containerID="5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a" Feb 19 10:08:56 crc kubenswrapper[4811]: E0219 10:08:56.961033 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.961317 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" event={"ID":"98110a48-08c0-4227-93dc-cfadaa45b044","Type":"ContainerStarted","Data":"db14fe10fb2c8b47dafa473084bf39165ee579459d1e75ecf53d8c6033943418"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.963468 4811 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.963499 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ec9b414d73d7ec139a7f88cfcb2fb5e7c8dd52d4af68287fd8a8f05009304033"} Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.966651 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.984706 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:56 crc kubenswrapper[4811]: I0219 10:08:56.997294 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.011587 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097b65ba9cbb94e97a9104f4bbaff80f560a22e0673d99158b5f86073aed8f9d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:37Z\\\"
,\\\"message\\\":\\\"W0219 10:08:36.479613 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 10:08:36.480107 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771495716 cert, and key in /tmp/serving-cert-2100221586/serving-signer.crt, /tmp/serving-cert-2100221586/serving-signer.key\\\\nI0219 10:08:37.047998 1 observer_polling.go:159] Starting file observer\\\\nW0219 10:08:37.051104 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 10:08:37.051351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 10:08:37.052877 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2100221586/tls.crt::/tmp/serving-cert-2100221586/tls.key\\\\\\\"\\\\nF0219 10:08:37.563687 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.019769 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.025625 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.031669 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.041772 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.052355 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.052388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.052396 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.052410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.052419 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:57Z","lastTransitionTime":"2026-02-19T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.052472 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.063409 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.075183 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.089897 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.101928 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.113126 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.124940 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.136865 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.152122 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.154855 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.154907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.154919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.154937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.154950 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:57Z","lastTransitionTime":"2026-02-19T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.174914 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.187332 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.204819 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097b65ba9cbb94e97a9104f4bbaff80f560a22e0673d99158b5f86073aed8f9d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:37Z\\\",\\\"message\\\":\\\"W0219 10:08:36.479613 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 10:08:36.480107 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771495716 cert, and key in /tmp/serving-cert-2100221586/serving-signer.crt, /tmp/serving-cert-2100221586/serving-signer.key\\\\nI0219 10:08:37.047998 1 observer_polling.go:159] Starting file observer\\\\nW0219 10:08:37.051104 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 10:08:37.051351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 10:08:37.052877 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2100221586/tls.crt::/tmp/serving-cert-2100221586/tls.key\\\\\\\"\\\\nF0219 10:08:37.563687 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 
secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.216281 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.231104 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.245850 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.258269 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.258322 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.258336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.258355 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.258367 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:57Z","lastTransitionTime":"2026-02-19T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.260674 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.281570 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.291404 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.293694 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.305505 4811 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.327461 4811 csr.go:261] certificate signing request csr-6k2sk is approved, waiting to be issued Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.338314 4811 csr.go:257] certificate signing request csr-6k2sk is issued Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.360945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.360993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.361006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.361027 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.361041 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:57Z","lastTransitionTime":"2026-02-19T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.463415 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.463467 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.463479 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.463493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.463502 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:57Z","lastTransitionTime":"2026-02-19T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.522705 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:13:35.907538477 +0000 UTC Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.565717 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.565751 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.565761 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.565773 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.565782 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:57Z","lastTransitionTime":"2026-02-19T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.580284 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.580411 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.580461 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.580603 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:08:59.580567066 +0000 UTC m=+28.107031752 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.580637 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.580622 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.580721 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:08:59.580699249 +0000 UTC m=+28.107163945 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.580765 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 10:08:59.58074517 +0000 UTC m=+28.107209916 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.582125 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.582131 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.582232 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.582141 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.582371 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.582305 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.668242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.668363 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.668390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.668430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.668480 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:57Z","lastTransitionTime":"2026-02-19T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.681005 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.681064 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.681223 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.681242 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.681257 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.681326 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 10:08:59.681308008 +0000 UTC m=+28.207772694 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.681326 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.681401 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.681435 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.681612 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 10:08:59.681576864 +0000 UTC m=+28.208041580 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.771189 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.771249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.771260 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.771280 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.771292 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:57Z","lastTransitionTime":"2026-02-19T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.873702 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.873763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.873779 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.873804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.873824 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:57Z","lastTransitionTime":"2026-02-19T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.969173 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.972057 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.972111 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.972128 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.972140 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.972152 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e"} Feb 19 10:08:57 crc 
kubenswrapper[4811]: I0219 10:08:57.972164 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.973457 4811 generic.go:334] "Generic (PLEG): container finished" podID="98110a48-08c0-4227-93dc-cfadaa45b044" containerID="172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541" exitCode=0 Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.973486 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" event={"ID":"98110a48-08c0-4227-93dc-cfadaa45b044","Type":"ContainerDied","Data":"172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.977276 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bpzh7" event={"ID":"221c25e2-b834-4fe6-83e4-714328c60b4e","Type":"ContainerStarted","Data":"3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.978201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.978238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.978257 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.978283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.978302 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:57Z","lastTransitionTime":"2026-02-19T10:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.979552 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.981919 4811 scope.go:117] "RemoveContainer" containerID="5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a" Feb 19 10:08:57 crc kubenswrapper[4811]: E0219 10:08:57.982093 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 10:08:57 crc kubenswrapper[4811]: I0219 10:08:57.988308 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:57Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.003193 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.027149 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.042261 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9
e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.061001 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.085531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.085576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.085585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.085604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.085619 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:58Z","lastTransitionTime":"2026-02-19T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.086582 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.107839 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.138803 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.164766 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.188012 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.188060 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.188071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.188093 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.188108 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:58Z","lastTransitionTime":"2026-02-19T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.193430 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.215345 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097b65ba9cbb94e97a9104f4bbaff80f560a22e0673d99158b5f86073aed8f9d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:37Z\\\",\\\"message\\\":\\\"W0219 10:08:36.479613 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 10:08:36.480107 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771495716 cert, and key in /tmp/serving-cert-2100221586/serving-signer.crt, /tmp/serving-cert-2100221586/serving-signer.key\\\\nI0219 10:08:37.047998 1 observer_polling.go:159] Starting file observer\\\\nW0219 10:08:37.051104 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 10:08:37.051351 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 10:08:37.052877 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2100221586/tls.crt::/tmp/serving-cert-2100221586/tls.key\\\\\\\"\\\\nF0219 10:08:37.563687 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" 
(2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.229285 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.245719 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.259787 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.276866 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9
e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.294525 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.294564 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.294574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:58 crc 
kubenswrapper[4811]: I0219 10:08:58.294589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.294598 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:58Z","lastTransitionTime":"2026-02-19T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.295504 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.329461 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.340027 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 10:03:57 +0000 UTC, rotation deadline is 2026-12-03 06:01:53.772560664 +0000 UTC Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.340065 4811 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6883h52m55.432497949s for next certificate rotation Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.343576 4811 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.356269 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.369860 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.391337 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.397138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.397189 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.397200 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.397219 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.397230 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:58Z","lastTransitionTime":"2026-02-19T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.418370 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.432153 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.444563 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.457688 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.469915 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be 
initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:58Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.499825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.499862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.499874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.499893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.499905 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:58Z","lastTransitionTime":"2026-02-19T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.523066 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:14:35.746561915 +0000 UTC Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.602382 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.602459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.602475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.602496 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.602508 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:58Z","lastTransitionTime":"2026-02-19T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.706035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.706072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.706082 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.706100 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.706110 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:58Z","lastTransitionTime":"2026-02-19T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.807920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.807952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.807976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.808010 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.808020 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:58Z","lastTransitionTime":"2026-02-19T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.845586 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.909945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.910009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.910019 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.910041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.910088 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:58Z","lastTransitionTime":"2026-02-19T10:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.986304 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" event={"ID":"98110a48-08c0-4227-93dc-cfadaa45b044","Type":"ContainerStarted","Data":"a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5"} Feb 19 10:08:58 crc kubenswrapper[4811]: I0219 10:08:58.987698 4811 scope.go:117] "RemoveContainer" containerID="5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a" Feb 19 10:08:58 crc kubenswrapper[4811]: E0219 10:08:58.987877 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.005737 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.012820 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.013005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.013114 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.013227 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.013313 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:59Z","lastTransitionTime":"2026-02-19T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.020924 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.038800 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.062008 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.074064 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9
e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.086124 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.097415 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.110196 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.117801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.117843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.117853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.117869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.117882 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:59Z","lastTransitionTime":"2026-02-19T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.128399 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.147427 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.164189 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.181793 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 
genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.194095 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:08:59Z is after 2025-08-24T17:21:41Z" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.220789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.220853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.220878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.220900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.220912 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:59Z","lastTransitionTime":"2026-02-19T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.323917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.323961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.323971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.323990 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.324000 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:59Z","lastTransitionTime":"2026-02-19T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.426643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.426698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.426711 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.426732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.426746 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:59Z","lastTransitionTime":"2026-02-19T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.523632 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:18:11.121474212 +0000 UTC Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.529101 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.529144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.529154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.529174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.529185 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:59Z","lastTransitionTime":"2026-02-19T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.582293 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.582345 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.582380 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.582567 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.582670 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.582750 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.604602 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.604749 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.604785 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.604919 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.604976 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 10:09:03.604960798 +0000 UTC m=+32.131425484 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.605040 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:09:03.605031119 +0000 UTC m=+32.131495805 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.605080 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.605107 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:03.605100451 +0000 UTC m=+32.131565137 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.631828 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.631881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.631894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.631915 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.631930 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:59Z","lastTransitionTime":"2026-02-19T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.705206 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.705291 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.705424 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.705469 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.705482 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.705524 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:03.705510015 +0000 UTC m=+32.231974701 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.705586 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.705641 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.705660 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:59 crc kubenswrapper[4811]: E0219 10:08:59.705737 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:03.70571386 +0000 UTC m=+32.232178586 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.734898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.734982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.735007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.735045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.735064 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:59Z","lastTransitionTime":"2026-02-19T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.837560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.837596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.837607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.837623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.837633 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:59Z","lastTransitionTime":"2026-02-19T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.940074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.940126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.940136 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.940158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.940171 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:08:59Z","lastTransitionTime":"2026-02-19T10:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.990645 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.995217 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b"} Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.998189 4811 generic.go:334] "Generic (PLEG): container finished" podID="98110a48-08c0-4227-93dc-cfadaa45b044" containerID="a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5" exitCode=0 Feb 19 10:08:59 crc kubenswrapper[4811]: I0219 10:08:59.998409 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" event={"ID":"98110a48-08c0-4227-93dc-cfadaa45b044","Type":"ContainerDied","Data":"a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5"} Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.001037 4811 scope.go:117] "RemoveContainer" containerID="5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a" Feb 19 10:09:00 crc kubenswrapper[4811]: E0219 10:09:00.005791 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.009552 4811 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.025724 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.039081 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.043499 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.043948 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.044422 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.045794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.045890 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:00Z","lastTransitionTime":"2026-02-19T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.055130 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.075415 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.090025 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.106028 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 
genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.118702 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.135503 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.148211 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.148262 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.148271 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.148293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.148307 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:00Z","lastTransitionTime":"2026-02-19T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.150311 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.164106 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.180882 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.194200 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.208789 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.222570 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.233570 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.251909 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.253763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.253791 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.253802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.253821 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.253834 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:00Z","lastTransitionTime":"2026-02-19T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.265546 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.280720 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.295260 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.310705 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.331851 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.349621 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.357298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.357350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.357361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:00 crc 
kubenswrapper[4811]: I0219 10:09:00.357383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.357400 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:00Z","lastTransitionTime":"2026-02-19T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.364174 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.379854 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.404614 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0
f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:00Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.460101 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.460149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.460158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.460180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.460193 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:00Z","lastTransitionTime":"2026-02-19T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.523979 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:08:44.313544932 +0000 UTC Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.578863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.578936 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.578955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.578998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.579021 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:00Z","lastTransitionTime":"2026-02-19T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.683861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.683901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.683912 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.683928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.683939 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:00Z","lastTransitionTime":"2026-02-19T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.785927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.785958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.785967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.785980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.785988 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:00Z","lastTransitionTime":"2026-02-19T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.888716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.888753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.888766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.888783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.888794 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:00Z","lastTransitionTime":"2026-02-19T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.991598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.991790 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.991816 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.991839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:00 crc kubenswrapper[4811]: I0219 10:09:00.991886 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:00Z","lastTransitionTime":"2026-02-19T10:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.006185 4811 generic.go:334] "Generic (PLEG): container finished" podID="98110a48-08c0-4227-93dc-cfadaa45b044" containerID="9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16" exitCode=0 Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.006287 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" event={"ID":"98110a48-08c0-4227-93dc-cfadaa45b044","Type":"ContainerDied","Data":"9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16"} Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.023960 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.042028 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be 
initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.054307 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.071548 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.085855 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.094640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.094715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.094736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 
10:09:01.094767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.094786 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:01Z","lastTransitionTime":"2026-02-19T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.104498 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.124031 4811 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.135967 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 
10:09:01.151798 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.170423 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.186233 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.197366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.197558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.197652 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.197741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.197822 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:01Z","lastTransitionTime":"2026-02-19T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.202921 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 
10:09:01.229883 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:01Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.300977 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.301324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.301406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.301522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.301602 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:01Z","lastTransitionTime":"2026-02-19T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.405339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.405800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.405887 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.405976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.406066 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:01Z","lastTransitionTime":"2026-02-19T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.508866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.508910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.508922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.508938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.508950 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:01Z","lastTransitionTime":"2026-02-19T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.525016 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:58:34.721163897 +0000 UTC Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.582638 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:01 crc kubenswrapper[4811]: E0219 10:09:01.582782 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.583165 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.583193 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:01 crc kubenswrapper[4811]: E0219 10:09:01.583758 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:01 crc kubenswrapper[4811]: E0219 10:09:01.583909 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.611248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.611623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.611776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.612006 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.612203 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:01Z","lastTransitionTime":"2026-02-19T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.715080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.715115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.715125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.715140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.715149 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:01Z","lastTransitionTime":"2026-02-19T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.817281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.817596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.817681 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.817788 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.817879 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:01Z","lastTransitionTime":"2026-02-19T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.920309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.920353 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.920366 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.920383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:01 crc kubenswrapper[4811]: I0219 10:09:01.920394 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:01Z","lastTransitionTime":"2026-02-19T10:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.012441 4811 generic.go:334] "Generic (PLEG): container finished" podID="98110a48-08c0-4227-93dc-cfadaa45b044" containerID="3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1" exitCode=0 Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.012680 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" event={"ID":"98110a48-08c0-4227-93dc-cfadaa45b044","Type":"ContainerDied","Data":"3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.022627 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.022738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.022802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.022832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.022902 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:02Z","lastTransitionTime":"2026-02-19T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.028110 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353f
dd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.043760 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.056819 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.072633 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.090309 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.104580 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.117872 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.126385 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.126434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.126461 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.126486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.126501 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:02Z","lastTransitionTime":"2026-02-19T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.131193 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.145826 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.160775 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.173652 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.187737 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.206559 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.228834 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.228863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.228871 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.228883 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.228892 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:02Z","lastTransitionTime":"2026-02-19T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.286788 4811 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.332601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.332630 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.332640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.332655 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.332665 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:02Z","lastTransitionTime":"2026-02-19T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.434783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.434823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.434837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.434857 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.434871 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:02Z","lastTransitionTime":"2026-02-19T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.526130 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:32:07.741324422 +0000 UTC Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.538065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.538117 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.538133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.538155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.538169 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:02Z","lastTransitionTime":"2026-02-19T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.597309 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353f
dd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.611252 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-nw57r"] Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.611800 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-nw57r" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.613557 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0
f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.614526 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.614617 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.614536 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.616077 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.631566 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.638967 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5phq\" (UniqueName: \"kubernetes.io/projected/5177375f-9a2c-4430-8c2e-307ddf187cb7-kube-api-access-d5phq\") pod \"node-ca-nw57r\" (UID: \"5177375f-9a2c-4430-8c2e-307ddf187cb7\") " pod="openshift-image-registry/node-ca-nw57r" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.639027 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5177375f-9a2c-4430-8c2e-307ddf187cb7-serviceca\") pod \"node-ca-nw57r\" (UID: \"5177375f-9a2c-4430-8c2e-307ddf187cb7\") " pod="openshift-image-registry/node-ca-nw57r" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.639052 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5177375f-9a2c-4430-8c2e-307ddf187cb7-host\") pod \"node-ca-nw57r\" (UID: \"5177375f-9a2c-4430-8c2e-307ddf187cb7\") " pod="openshift-image-registry/node-ca-nw57r" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.641147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.641232 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:02 crc 
kubenswrapper[4811]: I0219 10:09:02.641257 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.641291 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.641313 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:02Z","lastTransitionTime":"2026-02-19T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.662734 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.678658 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.697504 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.713236 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.734237 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.740608 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5phq\" (UniqueName: 
\"kubernetes.io/projected/5177375f-9a2c-4430-8c2e-307ddf187cb7-kube-api-access-d5phq\") pod \"node-ca-nw57r\" (UID: \"5177375f-9a2c-4430-8c2e-307ddf187cb7\") " pod="openshift-image-registry/node-ca-nw57r" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.740691 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5177375f-9a2c-4430-8c2e-307ddf187cb7-serviceca\") pod \"node-ca-nw57r\" (UID: \"5177375f-9a2c-4430-8c2e-307ddf187cb7\") " pod="openshift-image-registry/node-ca-nw57r" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.740726 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5177375f-9a2c-4430-8c2e-307ddf187cb7-host\") pod \"node-ca-nw57r\" (UID: \"5177375f-9a2c-4430-8c2e-307ddf187cb7\") " pod="openshift-image-registry/node-ca-nw57r" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.740820 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5177375f-9a2c-4430-8c2e-307ddf187cb7-host\") pod \"node-ca-nw57r\" (UID: \"5177375f-9a2c-4430-8c2e-307ddf187cb7\") " pod="openshift-image-registry/node-ca-nw57r" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.742069 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5177375f-9a2c-4430-8c2e-307ddf187cb7-serviceca\") pod \"node-ca-nw57r\" (UID: \"5177375f-9a2c-4430-8c2e-307ddf187cb7\") " pod="openshift-image-registry/node-ca-nw57r" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.745601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.745640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:02 crc 
kubenswrapper[4811]: I0219 10:09:02.745650 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.745665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.745675 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:02Z","lastTransitionTime":"2026-02-19T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.747267 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.757596 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5phq\" (UniqueName: \"kubernetes.io/projected/5177375f-9a2c-4430-8c2e-307ddf187cb7-kube-api-access-d5phq\") pod \"node-ca-nw57r\" (UID: \"5177375f-9a2c-4430-8c2e-307ddf187cb7\") " pod="openshift-image-registry/node-ca-nw57r" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 
10:09:02.762775 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 
10:09:02.786982 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.811877 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.834744 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.849027 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.849083 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.849095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.849120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.849138 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:02Z","lastTransitionTime":"2026-02-19T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.852093 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.875595 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.899381 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.918714 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.928326 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-nw57r" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.936769 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0
f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.954351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.954900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.954917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.954943 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.954962 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:02Z","lastTransitionTime":"2026-02-19T10:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.958382 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.975222 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:02 crc kubenswrapper[4811]: I0219 10:09:02.989013 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.002354 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.019471 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.020574 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" event={"ID":"98110a48-08c0-4227-93dc-cfadaa45b044","Type":"ContainerStarted","Data":"c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.030685 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.031208 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.031758 4811 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/node-ca-nw57r" event={"ID":"5177375f-9a2c-4430-8c2e-307ddf187cb7","Type":"ContainerStarted","Data":"fd2a8f9d1440eb467b4ede27cd7f9deafe103f0b0ba7f3893a0e1d09597d37ed"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.037053 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.051218 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.058095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.058127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.058136 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.058151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.058161 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:03Z","lastTransitionTime":"2026-02-19T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.066832 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.074804 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.084242 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.102142 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.121407 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0
f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.135824 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.148942 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.164477 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.165444 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.165508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.165522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.165547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.165562 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:03Z","lastTransitionTime":"2026-02-19T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.181364 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.198988 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.214586 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.228153 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.251781 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.268140 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.269008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:03 crc 
kubenswrapper[4811]: I0219 10:09:03.269048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.269060 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.269077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.269088 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:03Z","lastTransitionTime":"2026-02-19T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.290896 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.315400 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9
bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.371887 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.371927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.371939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.371959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.371974 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:03Z","lastTransitionTime":"2026-02-19T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.373026 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.475603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.475643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.475653 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.475667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.475677 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:03Z","lastTransitionTime":"2026-02-19T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.527063 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:33:44.778509016 +0000 UTC Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.579102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.579172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.579184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.579205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.579219 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:03Z","lastTransitionTime":"2026-02-19T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.582396 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.582396 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.582586 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.582705 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.582860 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.582970 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.652902 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.653053 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.653099 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:09:11.653072236 +0000 UTC m=+40.179536922 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.653128 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.653156 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.653205 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:11.653196589 +0000 UTC m=+40.179661275 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.653273 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.653316 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:11.653308032 +0000 UTC m=+40.179772718 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.682793 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.682833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.682842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.682861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.682873 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:03Z","lastTransitionTime":"2026-02-19T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.754057 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.754169 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.754352 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.754363 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.754443 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.754497 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.754376 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.754566 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.754592 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:11.754565726 +0000 UTC m=+40.281030402 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:03 crc kubenswrapper[4811]: E0219 10:09:03.754635 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:11.754614778 +0000 UTC m=+40.281079464 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.791990 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.792056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.792067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.792088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.792102 4811 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:03Z","lastTransitionTime":"2026-02-19T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.895331 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.895394 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.895411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.895436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.895477 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:03Z","lastTransitionTime":"2026-02-19T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.998981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.999074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.999084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.999106 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:03 crc kubenswrapper[4811]: I0219 10:09:03.999120 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:03Z","lastTransitionTime":"2026-02-19T10:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.038126 4811 generic.go:334] "Generic (PLEG): container finished" podID="98110a48-08c0-4227-93dc-cfadaa45b044" containerID="c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734" exitCode=0 Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.038236 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" event={"ID":"98110a48-08c0-4227-93dc-cfadaa45b044","Type":"ContainerDied","Data":"c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734"} Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.040767 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.040929 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nw57r" event={"ID":"5177375f-9a2c-4430-8c2e-307ddf187cb7","Type":"ContainerStarted","Data":"bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee"} Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.041597 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.052892 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.067200 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.072343 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.091031 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.101412 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.101465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.101477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.101492 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.101504 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:04Z","lastTransitionTime":"2026-02-19T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.107798 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0
f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.120233 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.131509 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.143994 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.158282 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.176894 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.195470 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.205960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.206080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.206108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.206195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.206263 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:04Z","lastTransitionTime":"2026-02-19T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.208700 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.221465 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.233284 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.248359 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.262013 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.276831 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.290974 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.304987 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.311971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.312033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.312056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.312085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.312106 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:04Z","lastTransitionTime":"2026-02-19T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.320059 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.336254 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.353327 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.371037 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.383719 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.399924 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.412674 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.418567 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.418617 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.418632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.418653 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.418668 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:04Z","lastTransitionTime":"2026-02-19T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.427762 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.446025 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.461778 4811 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:04Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.521299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.521364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.521376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.521397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.521411 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:04Z","lastTransitionTime":"2026-02-19T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.527552 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 07:54:26.264674766 +0000 UTC Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.624251 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.624282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.624290 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.624304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.624313 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:04Z","lastTransitionTime":"2026-02-19T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.726736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.726783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.726794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.726814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.726848 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:04Z","lastTransitionTime":"2026-02-19T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.829289 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.829320 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.829332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.829347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.829358 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:04Z","lastTransitionTime":"2026-02-19T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.932310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.932380 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.932397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.932425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:04 crc kubenswrapper[4811]: I0219 10:09:04.932472 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:04Z","lastTransitionTime":"2026-02-19T10:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.035155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.035203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.035216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.035231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.035244 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.048978 4811 generic.go:334] "Generic (PLEG): container finished" podID="98110a48-08c0-4227-93dc-cfadaa45b044" containerID="39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983" exitCode=0 Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.049067 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" event={"ID":"98110a48-08c0-4227-93dc-cfadaa45b044","Type":"ContainerDied","Data":"39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983"} Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.049156 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.074581 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae20
3c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.102168 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.117026 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.176866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.176926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.176941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.176965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.176987 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.177130 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.189524 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3
a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.203168 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.217299 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0
f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.228135 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.238536 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.247578 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.261662 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.275903 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.279709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.279743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.279755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.279775 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.279786 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.290870 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.312023 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.382577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.382622 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.382633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.382681 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.382693 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.485003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.485057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.485067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.485091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.485104 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.528461 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:58:12.417499916 +0000 UTC Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.588804 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.588822 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.588874 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:05 crc kubenswrapper[4811]: E0219 10:09:05.589347 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:05 crc kubenswrapper[4811]: E0219 10:09:05.589561 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:05 crc kubenswrapper[4811]: E0219 10:09:05.593492 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.594846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.594896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.594915 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.594934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.594991 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.698061 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.698099 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.698109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.698125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.698134 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.725348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.725434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.725475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.725506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.725521 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: E0219 10:09:05.744197 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.750234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.750295 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.750309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.750334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.750351 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: E0219 10:09:05.764731 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.770374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.770416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.770426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.770459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.770473 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: E0219 10:09:05.787227 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.791622 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.791669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.791680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.791700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.791714 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: E0219 10:09:05.806223 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.811791 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.811852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.811868 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.811899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.811918 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: E0219 10:09:05.827955 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:05 crc kubenswrapper[4811]: E0219 10:09:05.828157 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.830488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.830523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.830575 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.830597 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.830607 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.935497 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.935549 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.935568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.935596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:05 crc kubenswrapper[4811]: I0219 10:09:05.935613 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:05Z","lastTransitionTime":"2026-02-19T10:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.038779 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.038825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.038862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.038880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.038891 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:06Z","lastTransitionTime":"2026-02-19T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.057679 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" event={"ID":"98110a48-08c0-4227-93dc-cfadaa45b044","Type":"ContainerStarted","Data":"bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836"} Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.057760 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.073641 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.093055 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be 
initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.105488 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.120624 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.138762 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.142045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.142110 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.142121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.142146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.142161 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:06Z","lastTransitionTime":"2026-02-19T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.153878 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.178810 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.194247 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.211267 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.235113 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.246225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.246273 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.246286 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.246303 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.246318 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:06Z","lastTransitionTime":"2026-02-19T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.249993 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:
08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.270624 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.289907 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.311270 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:06Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.349228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.349276 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.349289 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.349315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.349326 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:06Z","lastTransitionTime":"2026-02-19T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.452477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.452514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.452523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.452542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.452552 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:06Z","lastTransitionTime":"2026-02-19T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.529080 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:17:58.80197678 +0000 UTC Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.555643 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.555701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.555717 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.555741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.555756 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:06Z","lastTransitionTime":"2026-02-19T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.658277 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.658769 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.658778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.658794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.658805 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:06Z","lastTransitionTime":"2026-02-19T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.761632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.762374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.762395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.762418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.762434 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:06Z","lastTransitionTime":"2026-02-19T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.864896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.864939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.864949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.864965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.864975 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:06Z","lastTransitionTime":"2026-02-19T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.968766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.968858 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.968902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.968928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:06 crc kubenswrapper[4811]: I0219 10:09:06.968941 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:06Z","lastTransitionTime":"2026-02-19T10:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.064745 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/0.log" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.067398 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010" exitCode=1 Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.067520 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010"} Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.068777 4811 scope.go:117] "RemoveContainer" containerID="6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.070596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.070629 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.070642 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.070663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.070677 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:07Z","lastTransitionTime":"2026-02-19T10:09:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.089702 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.110534 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.127044 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.145338 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.169737 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"message\\\":\\\"0219 10:09:06.334248 6012 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:06.334860 6012 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:06.334897 6012 handler.go:208] Removed *v1.Pod 
event handler 3\\\\nI0219 10:09:06.334936 6012 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 10:09:06.334949 6012 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 10:09:06.334997 6012 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:06.335017 6012 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 10:09:06.335023 6012 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 10:09:06.335036 6012 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:06.335069 6012 factory.go:656] Stopping watch factory\\\\nI0219 10:09:06.335090 6012 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:06.335120 6012 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:06.335133 6012 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:06.335142 6012 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:06.335152 6012 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:06.335165 6012 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a
7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.181530 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.181590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.181603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.181631 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.181645 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:07Z","lastTransitionTime":"2026-02-19T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.191414 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resou
rce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac1
17eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.209483 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be 
initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.224513 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.237842 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.252985 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.272255 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.284798 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.284842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.284856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.284874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.284885 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:07Z","lastTransitionTime":"2026-02-19T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.286634 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.306764 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.319810 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:07Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.387505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 
10:09:07.387541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.387555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.387571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.387583 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:07Z","lastTransitionTime":"2026-02-19T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.489406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.489441 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.489477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.489491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.489501 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:07Z","lastTransitionTime":"2026-02-19T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.530277 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:42:59.717562165 +0000 UTC Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.582597 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.582634 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.582688 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:07 crc kubenswrapper[4811]: E0219 10:09:07.582724 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:07 crc kubenswrapper[4811]: E0219 10:09:07.582813 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:07 crc kubenswrapper[4811]: E0219 10:09:07.582886 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.591174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.591197 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.591205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.591218 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.591226 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:07Z","lastTransitionTime":"2026-02-19T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.693949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.693990 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.693998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.694011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.694019 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:07Z","lastTransitionTime":"2026-02-19T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.797371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.797422 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.797431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.797459 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.797470 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:07Z","lastTransitionTime":"2026-02-19T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.899942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.899983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.899993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.900008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:07 crc kubenswrapper[4811]: I0219 10:09:07.900019 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:07Z","lastTransitionTime":"2026-02-19T10:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.002677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.002727 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.002740 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.002759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.002772 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:08Z","lastTransitionTime":"2026-02-19T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.074283 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/1.log" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.074926 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/0.log" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.079147 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961" exitCode=1 Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.079208 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961"} Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.079251 4811 scope.go:117] "RemoveContainer" containerID="6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.080429 4811 scope.go:117] "RemoveContainer" containerID="836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961" Feb 19 10:09:08 crc kubenswrapper[4811]: E0219 10:09:08.080754 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.095149 4811 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.105700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.105754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.105765 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.105784 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.105801 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:08Z","lastTransitionTime":"2026-02-19T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.111602 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.128753 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.143388 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.154187 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.168085 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.184271 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.199653 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.208889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.208934 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.208947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.208966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.208979 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:08Z","lastTransitionTime":"2026-02-19T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.218109 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.240872 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"message\\\":\\\"0219 10:09:06.334248 6012 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:06.334860 6012 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:06.334897 6012 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:06.334936 6012 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 
10:09:06.334949 6012 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 10:09:06.334997 6012 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:06.335017 6012 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 10:09:06.335023 6012 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 10:09:06.335036 6012 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:06.335069 6012 factory.go:656] Stopping watch factory\\\\nI0219 10:09:06.335090 6012 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:06.335120 6012 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:06.335133 6012 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:06.335142 6012 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:06.335152 6012 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:06.335165 6012 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] 
Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkb
cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.254471 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.272481 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c073
72b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.290370 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be 
initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.302988 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:08Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.312481 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.312521 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.312537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.312557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.312571 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:08Z","lastTransitionTime":"2026-02-19T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.414875 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.414906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.414914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.414928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.414936 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:08Z","lastTransitionTime":"2026-02-19T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.518401 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.518475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.518495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.518517 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.518531 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:08Z","lastTransitionTime":"2026-02-19T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.530641 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:54:30.693089226 +0000 UTC Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.621978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.622035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.622053 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.622081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.622102 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:08Z","lastTransitionTime":"2026-02-19T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.724709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.724778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.724796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.724827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.724848 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:08Z","lastTransitionTime":"2026-02-19T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.829002 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.829041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.829055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.829075 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.829090 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:08Z","lastTransitionTime":"2026-02-19T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.932152 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.932213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.932225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.932245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:08 crc kubenswrapper[4811]: I0219 10:09:08.932260 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:08Z","lastTransitionTime":"2026-02-19T10:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.036034 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.036089 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.036101 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.036123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.036137 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:09Z","lastTransitionTime":"2026-02-19T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.084738 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/1.log" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.139673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.139713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.139726 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.139741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.139750 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:09Z","lastTransitionTime":"2026-02-19T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.242471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.242517 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.242528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.242547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.242559 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:09Z","lastTransitionTime":"2026-02-19T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.345278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.345707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.345808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.345889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.345973 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:09Z","lastTransitionTime":"2026-02-19T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.422128 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t"] Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.422683 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.426316 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.426739 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.435954 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c742c05c-adc4-491d-994f-036af2fda249-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.436016 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c742c05c-adc4-491d-994f-036af2fda249-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.436061 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c742c05c-adc4-491d-994f-036af2fda249-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.436123 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-ngn6v\" (UniqueName: \"kubernetes.io/projected/c742c05c-adc4-491d-994f-036af2fda249-kube-api-access-ngn6v\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.444256 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:
34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\
\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.449004 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.449045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.449059 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:09 crc kubenswrapper[4811]: 
I0219 10:09:09.449080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.449093 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:09Z","lastTransitionTime":"2026-02-19T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.463551 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0
f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.477192 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.495412 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.511481 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.527044 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.531078 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:42:55.88676461 +0000 UTC Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.537142 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngn6v\" (UniqueName: \"kubernetes.io/projected/c742c05c-adc4-491d-994f-036af2fda249-kube-api-access-ngn6v\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.537197 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c742c05c-adc4-491d-994f-036af2fda249-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") 
" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.537247 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c742c05c-adc4-491d-994f-036af2fda249-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.537280 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c742c05c-adc4-491d-994f-036af2fda249-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.538331 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c742c05c-adc4-491d-994f-036af2fda249-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.538343 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c742c05c-adc4-491d-994f-036af2fda249-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.542195 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.547152 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c742c05c-adc4-491d-994f-036af2fda249-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.551277 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.551310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.551319 4811 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.551334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.551345 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:09Z","lastTransitionTime":"2026-02-19T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.559539 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.568259 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngn6v\" (UniqueName: \"kubernetes.io/projected/c742c05c-adc4-491d-994f-036af2fda249-kube-api-access-ngn6v\") pod \"ovnkube-control-plane-749d76644c-sx75t\" (UID: \"c742c05c-adc4-491d-994f-036af2fda249\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.578061 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9
e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.582710 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.582806 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:09 crc kubenswrapper[4811]: E0219 10:09:09.582891 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.582708 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:09 crc kubenswrapper[4811]: E0219 10:09:09.583021 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:09 crc kubenswrapper[4811]: E0219 10:09:09.583901 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.591671 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.607965 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.619687 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.634256 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.649975 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.655411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.655480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.655491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.655507 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.655519 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:09Z","lastTransitionTime":"2026-02-19T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.672777 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"message\\\":\\\"0219 10:09:06.334248 6012 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:06.334860 6012 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:06.334897 6012 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:06.334936 6012 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 
10:09:06.334949 6012 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 10:09:06.334997 6012 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:06.335017 6012 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 10:09:06.335023 6012 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 10:09:06.335036 6012 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:06.335069 6012 factory.go:656] Stopping watch factory\\\\nI0219 10:09:06.335090 6012 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:06.335120 6012 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:06.335133 6012 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:06.335142 6012 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:06.335152 6012 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:06.335165 6012 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] 
Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkb
cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:09Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.736083 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.757988 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.758023 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.758031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.758044 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.758054 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:09Z","lastTransitionTime":"2026-02-19T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:09 crc kubenswrapper[4811]: W0219 10:09:09.758426 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc742c05c_adc4_491d_994f_036af2fda249.slice/crio-1c4d483e2051ccd2f81d08213963083ef3d89490761eba8a827e8ba9f75fd099 WatchSource:0}: Error finding container 1c4d483e2051ccd2f81d08213963083ef3d89490761eba8a827e8ba9f75fd099: Status 404 returned error can't find the container with id 1c4d483e2051ccd2f81d08213963083ef3d89490761eba8a827e8ba9f75fd099 Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.861420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.861549 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.861570 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.861595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.861612 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:09Z","lastTransitionTime":"2026-02-19T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.964960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.965413 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.965467 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.965487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:09 crc kubenswrapper[4811]: I0219 10:09:09.965501 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:09Z","lastTransitionTime":"2026-02-19T10:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.067775 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.067802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.067811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.067826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.067836 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:10Z","lastTransitionTime":"2026-02-19T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.093825 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" event={"ID":"c742c05c-adc4-491d-994f-036af2fda249","Type":"ContainerStarted","Data":"a1f41e09b6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.093888 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" event={"ID":"c742c05c-adc4-491d-994f-036af2fda249","Type":"ContainerStarted","Data":"e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.093903 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" event={"ID":"c742c05c-adc4-491d-994f-036af2fda249","Type":"ContainerStarted","Data":"1c4d483e2051ccd2f81d08213963083ef3d89490761eba8a827e8ba9f75fd099"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.106150 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.119709 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.139424 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"message\\\":\\\"0219 10:09:06.334248 6012 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:06.334860 6012 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:06.334897 6012 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:06.334936 6012 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 
10:09:06.334949 6012 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 10:09:06.334997 6012 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:06.335017 6012 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 10:09:06.335023 6012 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 10:09:06.335036 6012 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:06.335069 6012 factory.go:656] Stopping watch factory\\\\nI0219 10:09:06.335090 6012 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:06.335120 6012 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:06.335133 6012 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:06.335142 6012 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:06.335152 6012 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:06.335165 6012 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] 
Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkb
cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.156026 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"
/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-o
perator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting 
for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.167507 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.170397 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.170430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.170442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.170484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.170501 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:10Z","lastTransitionTime":"2026-02-19T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.188375 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.217419 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.243216 4811 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.265634 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.273143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.273176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.273188 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.273203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.273214 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:10Z","lastTransitionTime":"2026-02-19T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.280284 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.293260 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.312720 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.327035 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.340304 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.354568 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b
6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.375479 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.375531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.375547 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.375568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.375581 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:10Z","lastTransitionTime":"2026-02-19T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.478238 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.478295 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.478304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.478321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.478333 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:10Z","lastTransitionTime":"2026-02-19T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.532190 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:22:39.799462881 +0000 UTC Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.580711 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.580753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.580766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.580783 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.580795 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:10Z","lastTransitionTime":"2026-02-19T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.684212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.684270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.684285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.684305 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.684317 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:10Z","lastTransitionTime":"2026-02-19T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.787315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.787363 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.787374 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.787394 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.787406 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:10Z","lastTransitionTime":"2026-02-19T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.890480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.890524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.890535 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.890551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.890563 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:10Z","lastTransitionTime":"2026-02-19T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.904675 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n5z6r"] Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.905284 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:10 crc kubenswrapper[4811]: E0219 10:09:10.905355 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.927261 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"message\\\":\\\"0219 10:09:06.334248 6012 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:06.334860 6012 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:06.334897 6012 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:06.334936 6012 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 
10:09:06.334949 6012 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 10:09:06.334997 6012 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:06.335017 6012 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 10:09:06.335023 6012 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 10:09:06.335036 6012 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:06.335069 6012 factory.go:656] Stopping watch factory\\\\nI0219 10:09:06.335090 6012 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:06.335120 6012 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:06.335133 6012 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:06.335142 6012 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:06.335152 6012 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:06.335165 6012 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] 
Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkb
cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.938626 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc 
kubenswrapper[4811]: I0219 10:09:10.950705 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.950760 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b57g5\" (UniqueName: \"kubernetes.io/projected/35d4fd8e-224d-468e-b6e2-a292a6e1949f-kube-api-access-b57g5\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.950736 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.964634 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.976124 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.989400 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:10Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.992876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.992905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.992913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.992926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:10 crc kubenswrapper[4811]: I0219 10:09:10.992935 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:10Z","lastTransitionTime":"2026-02-19T10:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.002761 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353f
dd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:11Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.014956 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:11Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.031557 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:11Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.045790 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:11Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.052258 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.052437 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b57g5\" (UniqueName: \"kubernetes.io/projected/35d4fd8e-224d-468e-b6e2-a292a6e1949f-kube-api-access-b57g5\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.052385 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.052804 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs podName:35d4fd8e-224d-468e-b6e2-a292a6e1949f nodeName:}" failed. No retries permitted until 2026-02-19 10:09:11.552783921 +0000 UTC m=+40.079248627 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs") pod "network-metrics-daemon-n5z6r" (UID: "35d4fd8e-224d-468e-b6e2-a292a6e1949f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.062540 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:11Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.074877 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b57g5\" (UniqueName: \"kubernetes.io/projected/35d4fd8e-224d-468e-b6e2-a292a6e1949f-kube-api-access-b57g5\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.077759 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:11Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.096331 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.096367 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.096377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 
10:09:11.096406 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.096416 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:11Z","lastTransitionTime":"2026-02-19T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.098895 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:11Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.115055 4811 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:11Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.133300 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:11Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.150027 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:11Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.199656 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:11 crc 
kubenswrapper[4811]: I0219 10:09:11.199965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.200087 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.200255 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.200374 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:11Z","lastTransitionTime":"2026-02-19T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.303636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.303947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.304037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.304138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.304238 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:11Z","lastTransitionTime":"2026-02-19T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.408085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.408183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.408205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.408283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.408312 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:11Z","lastTransitionTime":"2026-02-19T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.510864 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.510910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.510926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.510942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.510952 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:11Z","lastTransitionTime":"2026-02-19T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.533347 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:55:31.025020672 +0000 UTC Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.557226 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.557361 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.557463 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs podName:35d4fd8e-224d-468e-b6e2-a292a6e1949f nodeName:}" failed. No retries permitted until 2026-02-19 10:09:12.557430864 +0000 UTC m=+41.083895630 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs") pod "network-metrics-daemon-n5z6r" (UID: "35d4fd8e-224d-468e-b6e2-a292a6e1949f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.583166 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.583176 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.583462 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.583326 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.583713 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.583786 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.613436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.613553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.613570 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.613596 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.613607 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:11Z","lastTransitionTime":"2026-02-19T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.658036 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.658263 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 10:09:27.658229437 +0000 UTC m=+56.184694123 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.658525 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.658714 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.658735 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.658917 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 10:09:27.658896433 +0000 UTC m=+56.185361179 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.658776 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.659085 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:27.659076057 +0000 UTC m=+56.185540743 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.716071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.716165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.716180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.716204 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.716217 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:11Z","lastTransitionTime":"2026-02-19T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.759712 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.759793 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.759923 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.759940 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.759953 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.760000 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:27.759985513 +0000 UTC m=+56.286450199 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.760371 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.760385 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.760395 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:11 crc kubenswrapper[4811]: E0219 10:09:11.760421 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:27.760413474 +0000 UTC m=+56.286878160 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.819665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.819931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.819947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.819966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.819986 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:11Z","lastTransitionTime":"2026-02-19T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.922942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.923004 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.923020 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.923040 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:11 crc kubenswrapper[4811]: I0219 10:09:11.923054 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:11Z","lastTransitionTime":"2026-02-19T10:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.026066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.026136 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.026150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.026174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.026188 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:12Z","lastTransitionTime":"2026-02-19T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.129134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.129184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.129194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.129212 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.129222 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:12Z","lastTransitionTime":"2026-02-19T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.232043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.232088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.232098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.232115 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.232125 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:12Z","lastTransitionTime":"2026-02-19T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.335053 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.335114 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.335126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.335144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.335155 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:12Z","lastTransitionTime":"2026-02-19T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.437541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.437642 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.437668 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.437697 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.437716 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:12Z","lastTransitionTime":"2026-02-19T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.534209 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:05:28.133884059 +0000 UTC Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.540661 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.540794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.540810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.540832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.540847 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:12Z","lastTransitionTime":"2026-02-19T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.569503 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:12 crc kubenswrapper[4811]: E0219 10:09:12.569643 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:12 crc kubenswrapper[4811]: E0219 10:09:12.569717 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs podName:35d4fd8e-224d-468e-b6e2-a292a6e1949f nodeName:}" failed. No retries permitted until 2026-02-19 10:09:14.56970072 +0000 UTC m=+43.096165406 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs") pod "network-metrics-daemon-n5z6r" (UID: "35d4fd8e-224d-468e-b6e2-a292a6e1949f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.582596 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:12 crc kubenswrapper[4811]: E0219 10:09:12.582774 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.597833 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.613229 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.628407 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.643673 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.644253 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.644304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.644323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.644341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.644351 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:12Z","lastTransitionTime":"2026-02-19T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.656003 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.670973 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.684731 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.699864 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b
6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.716295 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.733013 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.746534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.746581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.746590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.746608 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.746620 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:12Z","lastTransitionTime":"2026-02-19T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.759082 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"message\\\":\\\"0219 10:09:06.334248 6012 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:06.334860 6012 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:06.334897 6012 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:06.334936 6012 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 
10:09:06.334949 6012 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 10:09:06.334997 6012 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:06.335017 6012 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 10:09:06.335023 6012 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 10:09:06.335036 6012 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:06.335069 6012 factory.go:656] Stopping watch factory\\\\nI0219 10:09:06.335090 6012 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:06.335120 6012 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:06.335133 6012 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:06.335142 6012 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:06.335152 6012 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:06.335165 6012 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] 
Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkb
cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.773982 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc 
kubenswrapper[4811]: I0219 10:09:12.796406 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.815631 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.828784 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.842569 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:12Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.849720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.849770 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.849784 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.849807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.849824 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:12Z","lastTransitionTime":"2026-02-19T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.953050 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.953122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.953144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.953174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:12 crc kubenswrapper[4811]: I0219 10:09:12.953194 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:12Z","lastTransitionTime":"2026-02-19T10:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.056705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.056758 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.056776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.056803 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.056822 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:13Z","lastTransitionTime":"2026-02-19T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.159959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.160011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.160027 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.160046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.160058 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:13Z","lastTransitionTime":"2026-02-19T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.263287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.263398 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.263411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.263442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.263475 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:13Z","lastTransitionTime":"2026-02-19T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.366221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.366272 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.366285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.366305 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.366318 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:13Z","lastTransitionTime":"2026-02-19T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.469535 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.470120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.470140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.470162 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.470175 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:13Z","lastTransitionTime":"2026-02-19T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.535322 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:08:24.823892169 +0000 UTC Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.572221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.572286 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.572310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.572339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.572361 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:13Z","lastTransitionTime":"2026-02-19T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.582101 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:13 crc kubenswrapper[4811]: E0219 10:09:13.582268 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.582804 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:13 crc kubenswrapper[4811]: E0219 10:09:13.582918 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.584011 4811 scope.go:117] "RemoveContainer" containerID="5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.585054 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:13 crc kubenswrapper[4811]: E0219 10:09:13.585213 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.675111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.675156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.675165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.675186 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.675198 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:13Z","lastTransitionTime":"2026-02-19T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.779854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.779930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.779949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.779983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.780007 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:13Z","lastTransitionTime":"2026-02-19T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.883157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.883201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.883210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.883227 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.883238 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:13Z","lastTransitionTime":"2026-02-19T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.985566 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.985616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.985626 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.985642 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:13 crc kubenswrapper[4811]: I0219 10:09:13.985653 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:13Z","lastTransitionTime":"2026-02-19T10:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.087800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.087845 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.087855 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.087870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.087880 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:14Z","lastTransitionTime":"2026-02-19T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.108625 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.110312 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34"} Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.110703 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.131936 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-
cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.148256 4811 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.161220 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.175197 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.189889 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.190746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.190827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.190853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.190885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.190908 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:14Z","lastTransitionTime":"2026-02-19T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.204485 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.220526 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.234829 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.251579 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.266844 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.285560 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.293534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:14 crc 
kubenswrapper[4811]: I0219 10:09:14.294000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.294011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.294030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.294042 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:14Z","lastTransitionTime":"2026-02-19T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.302878 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b6058e874ace3c98bd66981467c7d
16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.315150 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.327858 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.344237 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a3eac356f8c4a4e297fee6105ed49f0993737bc216516d008365d5cdb4bf010\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"message\\\":\\\"0219 10:09:06.334248 6012 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:06.334860 6012 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:06.334897 6012 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:06.334936 6012 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 
10:09:06.334949 6012 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 10:09:06.334997 6012 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:06.335017 6012 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 10:09:06.335023 6012 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 10:09:06.335036 6012 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:06.335069 6012 factory.go:656] Stopping watch factory\\\\nI0219 10:09:06.335090 6012 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:06.335120 6012 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:06.335133 6012 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:06.335142 6012 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:06.335152 6012 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:06.335165 6012 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] 
Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkb
cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.353941 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:14Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:14 crc 
kubenswrapper[4811]: I0219 10:09:14.397168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.397216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.397227 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.397243 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.397254 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:14Z","lastTransitionTime":"2026-02-19T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.499532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.499572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.499583 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.499597 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.499608 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:14Z","lastTransitionTime":"2026-02-19T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.535959 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:57:38.972568727 +0000 UTC Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.582655 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:14 crc kubenswrapper[4811]: E0219 10:09:14.582843 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.587475 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:14 crc kubenswrapper[4811]: E0219 10:09:14.587626 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:14 crc kubenswrapper[4811]: E0219 10:09:14.587692 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs podName:35d4fd8e-224d-468e-b6e2-a292a6e1949f nodeName:}" failed. No retries permitted until 2026-02-19 10:09:18.587678112 +0000 UTC m=+47.114142798 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs") pod "network-metrics-daemon-n5z6r" (UID: "35d4fd8e-224d-468e-b6e2-a292a6e1949f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.602559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.602612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.602623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.602642 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.602656 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:14Z","lastTransitionTime":"2026-02-19T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.705798 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.705858 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.705872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.705894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.706146 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:14Z","lastTransitionTime":"2026-02-19T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.810319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.810405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.810426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.810488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.810515 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:14Z","lastTransitionTime":"2026-02-19T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.913943 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.914000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.914010 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.914025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:14 crc kubenswrapper[4811]: I0219 10:09:14.914040 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:14Z","lastTransitionTime":"2026-02-19T10:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.018026 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.018080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.018093 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.018120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.018139 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:15Z","lastTransitionTime":"2026-02-19T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.120731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.120779 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.120791 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.120808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.120819 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:15Z","lastTransitionTime":"2026-02-19T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.223654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.223718 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.223737 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.223760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.223780 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:15Z","lastTransitionTime":"2026-02-19T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.327670 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.327710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.327720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.327741 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.327752 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:15Z","lastTransitionTime":"2026-02-19T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.431026 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.431088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.431109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.431139 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.431160 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:15Z","lastTransitionTime":"2026-02-19T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.533913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.533975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.533987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.534011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.534025 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:15Z","lastTransitionTime":"2026-02-19T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.536269 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:35:56.609384284 +0000 UTC Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.582713 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.582740 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:15 crc kubenswrapper[4811]: E0219 10:09:15.582896 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.582740 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:15 crc kubenswrapper[4811]: E0219 10:09:15.582982 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:15 crc kubenswrapper[4811]: E0219 10:09:15.583045 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.637550 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.637619 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.637636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.637669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.637691 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:15Z","lastTransitionTime":"2026-02-19T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.740501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.740549 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.740566 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.740589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.740606 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:15Z","lastTransitionTime":"2026-02-19T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.844370 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.844416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.844430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.844463 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.844474 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:15Z","lastTransitionTime":"2026-02-19T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.947028 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.947079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.947092 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.947110 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:15 crc kubenswrapper[4811]: I0219 10:09:15.947122 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:15Z","lastTransitionTime":"2026-02-19T10:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.050700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.050791 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.050815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.050856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.050887 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.153974 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.154038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.154055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.154080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.154098 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.220882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.220935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.220947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.220973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.220987 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: E0219 10:09:16.240071 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:16Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.245412 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.245487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.245501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.245521 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.245539 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: E0219 10:09:16.263875 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:16Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.272424 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.272508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.272520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.272539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.272551 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: E0219 10:09:16.286008 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:16Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.290808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.290863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.290880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.290907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.290924 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: E0219 10:09:16.306385 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:16Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.311899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.311942 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.311955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.311969 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.311979 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: E0219 10:09:16.324027 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:16Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:16 crc kubenswrapper[4811]: E0219 10:09:16.324146 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.325940 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.325989 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.326007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.326025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.326034 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.429018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.429071 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.429086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.429106 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.429121 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.532481 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.532559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.532575 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.532602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.532619 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.536729 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:15:41.895584721 +0000 UTC Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.583244 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:16 crc kubenswrapper[4811]: E0219 10:09:16.583578 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.635330 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.635381 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.635390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.635412 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.635424 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.738249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.738304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.738317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.738337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.738348 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.841421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.841567 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.841599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.841625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.841647 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.944615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.944679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.944695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.944717 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:16 crc kubenswrapper[4811]: I0219 10:09:16.944732 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:16Z","lastTransitionTime":"2026-02-19T10:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.047709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.047767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.047782 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.047801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.047813 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:17Z","lastTransitionTime":"2026-02-19T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.150193 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.150278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.150293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.150325 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.150348 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:17Z","lastTransitionTime":"2026-02-19T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.253030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.253070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.253081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.253097 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.253111 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:17Z","lastTransitionTime":"2026-02-19T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.355707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.355760 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.355772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.355789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.355802 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:17Z","lastTransitionTime":"2026-02-19T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.458706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.458774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.458788 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.458811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.458846 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:17Z","lastTransitionTime":"2026-02-19T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.537881 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:34:35.765609092 +0000 UTC Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.562308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.562363 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.562387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.562412 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.562432 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:17Z","lastTransitionTime":"2026-02-19T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.582719 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.582761 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.582948 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:17 crc kubenswrapper[4811]: E0219 10:09:17.583092 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:17 crc kubenswrapper[4811]: E0219 10:09:17.583232 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:17 crc kubenswrapper[4811]: E0219 10:09:17.583306 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.665253 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.665300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.665309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.665326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.665337 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:17Z","lastTransitionTime":"2026-02-19T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.767956 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.768027 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.768057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.768104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.768134 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:17Z","lastTransitionTime":"2026-02-19T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.870628 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.870705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.870724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.870756 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.870777 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:17Z","lastTransitionTime":"2026-02-19T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.974427 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.974528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.974550 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.974580 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:17 crc kubenswrapper[4811]: I0219 10:09:17.974600 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:17Z","lastTransitionTime":"2026-02-19T10:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.077065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.077122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.077134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.077157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.077169 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:18Z","lastTransitionTime":"2026-02-19T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.180806 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.180881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.180895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.180909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.180921 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:18Z","lastTransitionTime":"2026-02-19T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.284154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.284227 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.284244 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.284274 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.284310 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:18Z","lastTransitionTime":"2026-02-19T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.387665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.387724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.387748 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.387776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.387798 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:18Z","lastTransitionTime":"2026-02-19T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.490317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.490356 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.490364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.490379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.490388 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:18Z","lastTransitionTime":"2026-02-19T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.538271 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:45:53.341743122 +0000 UTC Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.582505 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:18 crc kubenswrapper[4811]: E0219 10:09:18.582646 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.592498 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.592616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.592642 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.592721 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.592787 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:18Z","lastTransitionTime":"2026-02-19T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.636761 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:18 crc kubenswrapper[4811]: E0219 10:09:18.637018 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:18 crc kubenswrapper[4811]: E0219 10:09:18.637130 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs podName:35d4fd8e-224d-468e-b6e2-a292a6e1949f nodeName:}" failed. No retries permitted until 2026-02-19 10:09:26.637099441 +0000 UTC m=+55.163564177 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs") pod "network-metrics-daemon-n5z6r" (UID: "35d4fd8e-224d-468e-b6e2-a292a6e1949f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.696473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.696554 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.696570 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.696599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.696622 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:18Z","lastTransitionTime":"2026-02-19T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.799755 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.799815 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.799825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.799846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.799861 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:18Z","lastTransitionTime":"2026-02-19T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.883628 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.884896 4811 scope.go:117] "RemoveContainer" containerID="836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.899436 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:18Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.902354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.902403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.902416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.902439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.902476 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:18Z","lastTransitionTime":"2026-02-19T10:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.913738 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:18Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.934400 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:18Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.951576 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:18Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.966965 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:18Z is after 2025-08-24T17:21:41Z" Feb 19 
10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.980739 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:18Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:18 crc kubenswrapper[4811]: I0219 10:09:18.995570 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:18Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc 
kubenswrapper[4811]: I0219 10:09:19.010470 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.010520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.010531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.010551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.010563 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:19Z","lastTransitionTime":"2026-02-19T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.011008 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.026827 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.043864 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.058657 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b
6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.076376 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.097090 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.113466 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.113505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.113514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.113531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.113541 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:19Z","lastTransitionTime":"2026-02-19T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.121327 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f2
1e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.129891 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/1.log" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.133188 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba"} Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.134400 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.139019 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc 
kubenswrapper[4811]: I0219 10:09:19.154625 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.169095 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.181652 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.192633 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b
6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.205566 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.216561 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.216651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.216660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.216675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.216685 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:19Z","lastTransitionTime":"2026-02-19T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.221864 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.244292 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.273846 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc 
kubenswrapper[4811]: I0219 10:09:19.289799 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.313793 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.319163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.319186 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.319196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.319213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.319226 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:19Z","lastTransitionTime":"2026-02-19T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.328185 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.341184 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.355144 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.367649 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.385625 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.405183 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.418510 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:19Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.421508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 
10:09:19.421562 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.421577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.421598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.421614 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:19Z","lastTransitionTime":"2026-02-19T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.524120 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.524152 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.524161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.524176 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.524184 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:19Z","lastTransitionTime":"2026-02-19T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.538785 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:34:00.330516527 +0000 UTC Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.583106 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.583207 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:19 crc kubenswrapper[4811]: E0219 10:09:19.583278 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:19 crc kubenswrapper[4811]: E0219 10:09:19.583404 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.583536 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:19 crc kubenswrapper[4811]: E0219 10:09:19.583597 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.627793 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.627838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.627854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.627875 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.627888 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:19Z","lastTransitionTime":"2026-02-19T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.730325 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.730358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.730368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.730381 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.730389 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:19Z","lastTransitionTime":"2026-02-19T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.832719 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.832780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.832798 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.832820 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.832839 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:19Z","lastTransitionTime":"2026-02-19T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.935174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.935221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.935229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.935248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:19 crc kubenswrapper[4811]: I0219 10:09:19.935259 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:19Z","lastTransitionTime":"2026-02-19T10:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.038116 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.038388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.038478 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.038555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.038621 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:20Z","lastTransitionTime":"2026-02-19T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.141223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.141281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.141294 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.141314 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.141327 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:20Z","lastTransitionTime":"2026-02-19T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.244486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.244528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.244540 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.244558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.244569 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:20Z","lastTransitionTime":"2026-02-19T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.348395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.348475 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.348491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.348519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.348533 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:20Z","lastTransitionTime":"2026-02-19T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.450868 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.450930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.450939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.450958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.450970 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:20Z","lastTransitionTime":"2026-02-19T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.539236 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:35:25.864838857 +0000 UTC Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.553277 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.553334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.553350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.553372 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.553387 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:20Z","lastTransitionTime":"2026-02-19T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.583066 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:20 crc kubenswrapper[4811]: E0219 10:09:20.583234 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.656611 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.656667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.656678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.656701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.656717 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:20Z","lastTransitionTime":"2026-02-19T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.759982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.760043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.760054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.760076 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.760089 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:20Z","lastTransitionTime":"2026-02-19T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.862878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.862931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.862945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.862966 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.862978 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:20Z","lastTransitionTime":"2026-02-19T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.965858 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.965910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.965920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.965935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:20 crc kubenswrapper[4811]: I0219 10:09:20.965945 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:20Z","lastTransitionTime":"2026-02-19T10:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.069066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.069118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.069129 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.069149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.069163 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:21Z","lastTransitionTime":"2026-02-19T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.142928 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/2.log" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.144166 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/1.log" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.147877 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba" exitCode=1 Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.147954 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba"} Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.148019 4811 scope.go:117] "RemoveContainer" containerID="836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.149273 4811 scope.go:117] "RemoveContainer" containerID="191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba" Feb 19 10:09:21 crc kubenswrapper[4811]: E0219 10:09:21.149621 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.172361 4811 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.172436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.172487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.172524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.172545 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:21Z","lastTransitionTime":"2026-02-19T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.177008 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.197713 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.212298 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b
6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.234258 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.261332 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.276050 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.276119 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.276131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.276149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.276164 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:21Z","lastTransitionTime":"2026-02-19T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.284995 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:20Z\\\",\\\"message\\\":\\\"3] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139644 6457 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139655 6457 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0219 10:09:20.139663 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed 
attempt(s)\\\\nI0219 10:09:20.139670 6457 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139631 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 10:09:20.139684 6457 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 10:09:20.139696 6457 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 398.819µs)\\\\nF0219 10:09:20.139608 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInfo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.320778 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc 
kubenswrapper[4811]: I0219 10:09:21.344031 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.357784 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.369737 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af
6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.378576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.378631 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.378658 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.378674 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.378705 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:21Z","lastTransitionTime":"2026-02-19T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.391804 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.404263 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.415429 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.425980 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.436806 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.447711 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:21Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.481288 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 
10:09:21.481329 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.481340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.481357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.481368 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:21Z","lastTransitionTime":"2026-02-19T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.540253 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:50:09.65298025 +0000 UTC Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.582503 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.582507 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.582636 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:21 crc kubenswrapper[4811]: E0219 10:09:21.582757 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:21 crc kubenswrapper[4811]: E0219 10:09:21.582827 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:21 crc kubenswrapper[4811]: E0219 10:09:21.582881 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.584550 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.584580 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.584590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.584606 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.584618 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:21Z","lastTransitionTime":"2026-02-19T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.687408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.687502 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.687518 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.687540 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.687690 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:21Z","lastTransitionTime":"2026-02-19T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.790694 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.790744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.790753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.790768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.790777 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:21Z","lastTransitionTime":"2026-02-19T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.892583 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.892620 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.892632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.892648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.892657 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:21Z","lastTransitionTime":"2026-02-19T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.995601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.995659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.995672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.995695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:21 crc kubenswrapper[4811]: I0219 10:09:21.995711 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:21Z","lastTransitionTime":"2026-02-19T10:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.098662 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.098705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.098715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.098733 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.098746 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:22Z","lastTransitionTime":"2026-02-19T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.152835 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/2.log" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.202167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.202198 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.202207 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.202223 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.202233 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:22Z","lastTransitionTime":"2026-02-19T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.305259 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.305328 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.305338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.305363 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.305375 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:22Z","lastTransitionTime":"2026-02-19T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.408606 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.408663 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.408674 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.408695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.408706 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:22Z","lastTransitionTime":"2026-02-19T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.512249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.512315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.512324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.512347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.512361 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:22Z","lastTransitionTime":"2026-02-19T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.540485 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:18:05.257795655 +0000 UTC Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.583352 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:22 crc kubenswrapper[4811]: E0219 10:09:22.583653 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.600518 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.615438 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.615483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.615493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.615509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.615518 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:22Z","lastTransitionTime":"2026-02-19T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.617272 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.632482 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.649568 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.666441 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.684387 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.700462 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.713557 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b
6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.718245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.718302 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.718316 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.718334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.718346 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:22Z","lastTransitionTime":"2026-02-19T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.729935 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.748287 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.767488 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:20Z\\\",\\\"message\\\":\\\"3] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139644 6457 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139655 6457 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0219 10:09:20.139663 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed 
attempt(s)\\\\nI0219 10:09:20.139670 6457 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139631 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 10:09:20.139684 6457 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 10:09:20.139696 6457 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 398.819µs)\\\\nF0219 10:09:20.139608 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInfo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.778050 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc 
kubenswrapper[4811]: I0219 10:09:22.789555 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.805756 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.818861 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af
6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.821432 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.821480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.821490 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.821508 4811 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.821520 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:22Z","lastTransitionTime":"2026-02-19T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.828369 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:22Z is after 2025-08-24T17:21:41Z"
Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.923920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.924404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.924676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.924914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:22 crc kubenswrapper[4811]: I0219 10:09:22.925126 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:22Z","lastTransitionTime":"2026-02-19T10:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.027847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.028144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.028215 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.028323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.028404 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:23Z","lastTransitionTime":"2026-02-19T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.130837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.130895 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.130911 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.130937 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.130954 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:23Z","lastTransitionTime":"2026-02-19T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.233038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.233364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.234341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.234383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.234402 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:23Z","lastTransitionTime":"2026-02-19T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.337573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.337659 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.337679 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.337729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.337763 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:23Z","lastTransitionTime":"2026-02-19T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.440113 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.440159 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.440171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.440194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.440207 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:23Z","lastTransitionTime":"2026-02-19T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.541034 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:07:11.1853851 +0000 UTC
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.542994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.543027 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.543035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.543052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.543065 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:23Z","lastTransitionTime":"2026-02-19T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.582773 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.582844 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.582840 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 10:09:23 crc kubenswrapper[4811]: E0219 10:09:23.582964 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 10:09:23 crc kubenswrapper[4811]: E0219 10:09:23.583094 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 10:09:23 crc kubenswrapper[4811]: E0219 10:09:23.583316 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.645325 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.645369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.645380 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.645398 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.645409 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:23Z","lastTransitionTime":"2026-02-19T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.748018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.748063 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.748074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.748089 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.748099 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:23Z","lastTransitionTime":"2026-02-19T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.851372 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.851431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.851494 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.851527 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.851548 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:23Z","lastTransitionTime":"2026-02-19T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.954691 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.954848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.954871 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.954970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:23 crc kubenswrapper[4811]: I0219 10:09:23.954990 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:23Z","lastTransitionTime":"2026-02-19T10:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.058750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.058814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.058836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.058864 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.058883 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:24Z","lastTransitionTime":"2026-02-19T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.161496 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.161560 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.161579 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.161602 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.161619 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:24Z","lastTransitionTime":"2026-02-19T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.265553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.265623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.265634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.265654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.265671 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:24Z","lastTransitionTime":"2026-02-19T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.368439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.368501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.368511 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.368532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.368546 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:24Z","lastTransitionTime":"2026-02-19T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.471890 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.471975 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.471992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.472020 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.472042 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:24Z","lastTransitionTime":"2026-02-19T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.541691 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 11:41:40.204775441 +0000 UTC
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.575087 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.575156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.575170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.575190 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.575222 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:24Z","lastTransitionTime":"2026-02-19T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.582610 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r"
Feb 19 10:09:24 crc kubenswrapper[4811]: E0219 10:09:24.582780 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.678184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.678231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.678243 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.678263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.678275 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:24Z","lastTransitionTime":"2026-02-19T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.780507 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.780564 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.780580 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.780605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.780622 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:24Z","lastTransitionTime":"2026-02-19T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.883943 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.884004 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.884023 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.884047 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.884063 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:24Z","lastTransitionTime":"2026-02-19T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.986668 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.986745 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.986759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.986776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:24 crc kubenswrapper[4811]: I0219 10:09:24.986788 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:24Z","lastTransitionTime":"2026-02-19T10:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.089812 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.089863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.089882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.089907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.089923 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:25Z","lastTransitionTime":"2026-02-19T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.192255 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.192321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.192337 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.192359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.192374 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:25Z","lastTransitionTime":"2026-02-19T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.296175 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.296252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.296272 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.296305 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.296333 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:25Z","lastTransitionTime":"2026-02-19T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.399811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.399857 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.399871 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.399892 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.399910 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:25Z","lastTransitionTime":"2026-02-19T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.502826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.502897 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.502910 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.502933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.502952 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:25Z","lastTransitionTime":"2026-02-19T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.542443 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:34:30.829317041 +0000 UTC
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.582117 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.582181 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 10:09:25 crc kubenswrapper[4811]: E0219 10:09:25.582292 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 10:09:25 crc kubenswrapper[4811]: E0219 10:09:25.582397 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.582479 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 10:09:25 crc kubenswrapper[4811]: E0219 10:09:25.582707 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.605818 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.605886 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.605900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.605927 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.605944 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:25Z","lastTransitionTime":"2026-02-19T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.708889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.708947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.708959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.708985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.708999 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:25Z","lastTransitionTime":"2026-02-19T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.811705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.811764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.811780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.811804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.811819 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:25Z","lastTransitionTime":"2026-02-19T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.915764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.915851 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.915877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.915904 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:25 crc kubenswrapper[4811]: I0219 10:09:25.915922 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:25Z","lastTransitionTime":"2026-02-19T10:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.019599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.019649 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.019662 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.019684 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.019701 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.122963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.123081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.123107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.123138 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.123159 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.225838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.225880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.225890 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.225906 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.225916 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.329179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.329216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.329226 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.329243 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.329253 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.385151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.385226 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.385239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.385264 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.385282 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: E0219 10:09:26.402033 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:26Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.410424 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.410494 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.410508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.410531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.410550 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: E0219 10:09:26.429783 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:26Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.434575 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.434624 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.434638 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.434657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.434672 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: E0219 10:09:26.448284 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:26Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.452946 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.452993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.453005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.453023 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.453035 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: E0219 10:09:26.465806 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:26Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.469539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.469587 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.469600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.469623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.469642 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: E0219 10:09:26.481930 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:26Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:26 crc kubenswrapper[4811]: E0219 10:09:26.482099 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.484122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.484144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.484155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.484169 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.484178 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.543014 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:43:33.33160007 +0000 UTC Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.582791 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:26 crc kubenswrapper[4811]: E0219 10:09:26.582948 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.587074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.587127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.587365 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.587391 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.587405 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.650188 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:26 crc kubenswrapper[4811]: E0219 10:09:26.650349 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:26 crc kubenswrapper[4811]: E0219 10:09:26.650409 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs podName:35d4fd8e-224d-468e-b6e2-a292a6e1949f nodeName:}" failed. No retries permitted until 2026-02-19 10:09:42.65038985 +0000 UTC m=+71.176854536 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs") pod "network-metrics-daemon-n5z6r" (UID: "35d4fd8e-224d-468e-b6e2-a292a6e1949f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.690340 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.690400 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.690410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.690435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.690462 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.793180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.793241 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.793254 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.793273 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.793285 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.896111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.896162 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.896174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.896193 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.896207 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.998074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.998123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.998135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.998150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:26 crc kubenswrapper[4811]: I0219 10:09:26.998159 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:26Z","lastTransitionTime":"2026-02-19T10:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.101067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.101137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.101163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.101194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.101219 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:27Z","lastTransitionTime":"2026-02-19T10:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.203435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.203534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.203546 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.203570 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.203588 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:27Z","lastTransitionTime":"2026-02-19T10:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.242335 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.257967 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.260294 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.277317 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.292140 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.306576 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.306637 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.306650 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.306727 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.306742 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:27Z","lastTransitionTime":"2026-02-19T10:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.308866 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.324322 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.339054 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.357844 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.373108 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b
6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.388341 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.408166 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.409778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.409810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.409822 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.409844 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.409863 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:27Z","lastTransitionTime":"2026-02-19T10:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.437797 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:20Z\\\",\\\"message\\\":\\\"3] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139644 6457 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139655 6457 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0219 10:09:20.139663 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed 
attempt(s)\\\\nI0219 10:09:20.139670 6457 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139631 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 10:09:20.139684 6457 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 10:09:20.139696 6457 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 398.819µs)\\\\nF0219 10:09:20.139608 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInfo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.449821 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc 
kubenswrapper[4811]: I0219 10:09:27.463867 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.478431 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.488667 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af
6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.499896 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:27Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.512961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.513007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.513018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.513038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.513050 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:27Z","lastTransitionTime":"2026-02-19T10:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.543973 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:46:40.789544791 +0000 UTC Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.582491 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.582588 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.582624 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.582689 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.582808 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.582882 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.615593 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.615641 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.615653 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.615688 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.615698 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:27Z","lastTransitionTime":"2026-02-19T10:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.661228 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.661393 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:09:59.661371247 +0000 UTC m=+88.187835933 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.661498 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.661757 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.661793 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.661838 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:59.661827178 +0000 UTC m=+88.188291864 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.661912 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.661961 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:59.661950221 +0000 UTC m=+88.188414907 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.718430 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.718520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.718551 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.718577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.718593 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:27Z","lastTransitionTime":"2026-02-19T10:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.763032 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.763081 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.763218 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.763235 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.763248 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.763306 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:59.763289908 +0000 UTC m=+88.289754594 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.763420 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.763504 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.763524 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:27 crc kubenswrapper[4811]: E0219 10:09:27.763662 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 10:09:59.763631256 +0000 UTC m=+88.290096072 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.821635 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.821673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.821682 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.821695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.821707 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:27Z","lastTransitionTime":"2026-02-19T10:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.925154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.925209 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.925219 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.925234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:27 crc kubenswrapper[4811]: I0219 10:09:27.925247 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:27Z","lastTransitionTime":"2026-02-19T10:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.028929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.028968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.028976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.028989 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.028998 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:28Z","lastTransitionTime":"2026-02-19T10:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.132140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.132187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.132197 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.132219 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.132235 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:28Z","lastTransitionTime":"2026-02-19T10:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.234473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.234738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.234865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.234982 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.235070 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:28Z","lastTransitionTime":"2026-02-19T10:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.337300 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.337657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.337756 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.337865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.337946 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:28Z","lastTransitionTime":"2026-02-19T10:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.440442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.440512 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.440524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.440542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.440553 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:28Z","lastTransitionTime":"2026-02-19T10:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.543854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.544310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.544424 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:46:37.766588689 +0000 UTC Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.544552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.544654 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.544681 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:28Z","lastTransitionTime":"2026-02-19T10:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.582779 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:28 crc kubenswrapper[4811]: E0219 10:09:28.582943 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.647441 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.647508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.647520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.647537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.647551 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:28Z","lastTransitionTime":"2026-02-19T10:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.750420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.750539 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.750562 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.750595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.750620 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:28Z","lastTransitionTime":"2026-02-19T10:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.849788 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.854373 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.854420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.854431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.854467 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.854481 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:28Z","lastTransitionTime":"2026-02-19T10:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.867093 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 
10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:28Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.879428 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9baac849-65d8-4d8e-90cc-1a82538dc848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:28Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.890811 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443d
d8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:28Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.904064 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:28Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.921414 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:28Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.936923 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:28Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.954120 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:28Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.961387 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.961437 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.961468 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.961493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.961511 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:28Z","lastTransitionTime":"2026-02-19T10:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.975900 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:28Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:28 crc kubenswrapper[4811]: I0219 10:09:28.988540 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:28Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.004235 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:29Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.020708 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:29Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.034113 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:29Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.046306 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b
6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:29Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.058999 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:29Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.063578 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.063607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.063617 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.063632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.063642 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:29Z","lastTransitionTime":"2026-02-19T10:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.072416 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:29Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.090159 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:20Z\\\",\\\"message\\\":\\\"3] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139644 6457 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139655 6457 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0219 10:09:20.139663 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed 
attempt(s)\\\\nI0219 10:09:20.139670 6457 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139631 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 10:09:20.139684 6457 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 10:09:20.139696 6457 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 398.819µs)\\\\nF0219 10:09:20.139608 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInfo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:29Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.100686 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:29Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:29 crc 
kubenswrapper[4811]: I0219 10:09:29.166104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.166144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.166154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.166172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.166184 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:29Z","lastTransitionTime":"2026-02-19T10:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.268282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.268325 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.268335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.268351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.268363 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:29Z","lastTransitionTime":"2026-02-19T10:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.371633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.371678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.371692 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.371711 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.371724 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:29Z","lastTransitionTime":"2026-02-19T10:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.474584 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.474639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.474653 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.474673 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.474686 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:29Z","lastTransitionTime":"2026-02-19T10:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.545606 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:17:53.050113211 +0000 UTC Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.577826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.577871 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.577883 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.577900 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.577918 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:29Z","lastTransitionTime":"2026-02-19T10:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.582316 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.582359 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.582337 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 10:09:29 crc kubenswrapper[4811]: E0219 10:09:29.582508 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 10:09:29 crc kubenswrapper[4811]: E0219 10:09:29.582681 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 10:09:29 crc kubenswrapper[4811]: E0219 10:09:29.582823 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.680301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.680348 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.680357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.680373 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.680382 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:29Z","lastTransitionTime":"2026-02-19T10:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.783379 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.783438 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.783502 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.783531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.783549 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:29Z","lastTransitionTime":"2026-02-19T10:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.887005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.887070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.887086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.887111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.887130 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:29Z","lastTransitionTime":"2026-02-19T10:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.989600 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.989689 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.989701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.989723 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:29 crc kubenswrapper[4811]: I0219 10:09:29.989733 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:29Z","lastTransitionTime":"2026-02-19T10:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.092420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.092476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.092488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.092505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.092518 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:30Z","lastTransitionTime":"2026-02-19T10:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.195334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.195390 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.195411 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.195438 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.195490 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:30Z","lastTransitionTime":"2026-02-19T10:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.298418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.298538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.298559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.298587 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.298605 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:30Z","lastTransitionTime":"2026-02-19T10:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.401728 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.401766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.401777 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.401794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.401808 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:30Z","lastTransitionTime":"2026-02-19T10:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.504944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.505210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.505277 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.505368 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.505522 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:30Z","lastTransitionTime":"2026-02-19T10:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.545986 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:14:48.992651493 +0000 UTC
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.582488 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r"
Feb 19 10:09:30 crc kubenswrapper[4811]: E0219 10:09:30.582959 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.608376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.608413 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.608425 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.608442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.608471 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:30Z","lastTransitionTime":"2026-02-19T10:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.711420 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.711508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.711524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.711541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.711554 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:30Z","lastTransitionTime":"2026-02-19T10:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.814384 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.814418 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.814428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.814468 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.814483 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:30Z","lastTransitionTime":"2026-02-19T10:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.917869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.918660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.918693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.918722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:30 crc kubenswrapper[4811]: I0219 10:09:30.918743 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:30Z","lastTransitionTime":"2026-02-19T10:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.021782 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.021855 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.021876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.021899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.021917 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:31Z","lastTransitionTime":"2026-02-19T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.124179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.124216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.124282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.124298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.124309 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:31Z","lastTransitionTime":"2026-02-19T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.226944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.227003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.227017 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.227037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.227050 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:31Z","lastTransitionTime":"2026-02-19T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.329700 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.329739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.329747 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.329764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.329773 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:31Z","lastTransitionTime":"2026-02-19T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.433505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.433549 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.433566 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.433590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.433609 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:31Z","lastTransitionTime":"2026-02-19T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.536632 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.536670 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.536683 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.536699 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.536712 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:31Z","lastTransitionTime":"2026-02-19T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.546726 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:08:49.112621159 +0000 UTC
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.584379 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.584424 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.584514 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 10:09:31 crc kubenswrapper[4811]: E0219 10:09:31.584673 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 10:09:31 crc kubenswrapper[4811]: E0219 10:09:31.584966 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 10:09:31 crc kubenswrapper[4811]: E0219 10:09:31.585091 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.639363 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.639403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.639415 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.639436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.639471 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:31Z","lastTransitionTime":"2026-02-19T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.742069 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.742126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.742137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.742154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.742166 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:31Z","lastTransitionTime":"2026-02-19T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.844822 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.844860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.844869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.844885 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.844895 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:31Z","lastTransitionTime":"2026-02-19T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.947904 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.947959 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.947970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.947984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:31 crc kubenswrapper[4811]: I0219 10:09:31.947997 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:31Z","lastTransitionTime":"2026-02-19T10:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.050818 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.050862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.050874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.050891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.050902 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:32Z","lastTransitionTime":"2026-02-19T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.153121 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.153153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.153161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.153193 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.153203 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:32Z","lastTransitionTime":"2026-02-19T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.255928 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.255986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.255996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.256015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.256027 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:32Z","lastTransitionTime":"2026-02-19T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.359009 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.359093 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.359110 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.359142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.359161 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:32Z","lastTransitionTime":"2026-02-19T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.461627 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.461677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.461687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.461706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.461717 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:32Z","lastTransitionTime":"2026-02-19T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.547198 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:25:56.192714641 +0000 UTC Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.564985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.565021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.565032 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.565050 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.565064 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:32Z","lastTransitionTime":"2026-02-19T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.583163 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:32 crc kubenswrapper[4811]: E0219 10:09:32.583388 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.599433 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.613057 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.624156 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b
6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.637511 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.651504 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.666945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.666987 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.667001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.667020 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.667033 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:32Z","lastTransitionTime":"2026-02-19T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.670005 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://836e730a514249add09d2f93e7bf147a27367952c1716d7efe636bb7d8097961\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:07Z\\\",\\\"message\\\":\\\"7.922925 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 10:09:07.923380 6244 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 10:09:07.923407 6244 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 10:09:07.923417 6244 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0219 10:09:07.923375 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:07.923500 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 10:09:07.923499 6244 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 10:09:07.923544 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 10:09:07.923572 6244 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 10:09:07.923676 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 10:09:07.923681 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 10:09:07.923771 6244 factory.go:656] Stopping watch factory\\\\nI0219 10:09:07.923789 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 10:09:07.923832 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:07.923891 6244 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:07.924046 6244 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:20Z\\\",\\\"message\\\":\\\"3] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139644 6457 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139655 6457 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0219 10:09:20.139663 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed 
attempt(s)\\\\nI0219 10:09:20.139670 6457 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139631 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 10:09:20.139684 6457 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 10:09:20.139696 6457 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 398.819µs)\\\\nF0219 10:09:20.139608 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInfo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.681032 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc 
kubenswrapper[4811]: I0219 10:09:32.694844 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.707751 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\"
,\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 
10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.719351 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9baac849-65d8-4d8e-90cc-1a82538dc848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.729341 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443d
d8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.738054 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.750584 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.761993 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.770132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.770163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.770174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 
10:09:32.770192 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.770203 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:32Z","lastTransitionTime":"2026-02-19T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.773713 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.784144 4811 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.795837 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:32Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 
10:09:32.872943 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.872976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.872985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.873000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.873009 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:32Z","lastTransitionTime":"2026-02-19T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.975952 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.975988 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.975998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.976014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:32 crc kubenswrapper[4811]: I0219 10:09:32.976026 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:32Z","lastTransitionTime":"2026-02-19T10:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.079201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.079274 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.079291 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.079313 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.079331 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:33Z","lastTransitionTime":"2026-02-19T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.182559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.182590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.182599 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.182612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.182622 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:33Z","lastTransitionTime":"2026-02-19T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.285571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.285611 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.285623 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.285640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.285651 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:33Z","lastTransitionTime":"2026-02-19T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.388249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.388287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.388299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.388315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.388327 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:33Z","lastTransitionTime":"2026-02-19T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.490848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.490892 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.490909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.490935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.490959 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:33Z","lastTransitionTime":"2026-02-19T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.547526 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:20:40.973774877 +0000 UTC Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.582820 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.582861 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.582946 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:33 crc kubenswrapper[4811]: E0219 10:09:33.582956 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:33 crc kubenswrapper[4811]: E0219 10:09:33.583119 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:33 crc kubenswrapper[4811]: E0219 10:09:33.583182 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.583783 4811 scope.go:117] "RemoveContainer" containerID="191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba" Feb 19 10:09:33 crc kubenswrapper[4811]: E0219 10:09:33.584033 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.593243 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.593291 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.593301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.593315 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.593327 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:33Z","lastTransitionTime":"2026-02-19T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.603161 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z 
is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.620763 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.637601 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.655308 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.673518 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:20Z\\\",\\\"message\\\":\\\"3] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139644 6457 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139655 6457 ovn.go:134] Ensuring zone local for Pod 
openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0219 10:09:20.139663 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0219 10:09:20.139670 6457 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139631 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 10:09:20.139684 6457 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 10:09:20.139696 6457 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 398.819µs)\\\\nF0219 10:09:20.139608 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInfo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f2
1e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.688986 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.696287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.696331 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.696344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.696362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.696373 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:33Z","lastTransitionTime":"2026-02-19T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.709374 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.733667 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9baac849-65d8-4d8e-90cc-1a82538dc848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.747847 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443d
d8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.760805 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.781392 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.799405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.799509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.799535 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.799561 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.799581 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:33Z","lastTransitionTime":"2026-02-19T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.801784 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\"
,\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 
10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.816188 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.841574 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.854535 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.866515 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.880694 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:33Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.903007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.903038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.903049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.903062 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:33 crc kubenswrapper[4811]: I0219 10:09:33.903072 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:33Z","lastTransitionTime":"2026-02-19T10:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.005352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.005399 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.005409 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.005483 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.005497 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:34Z","lastTransitionTime":"2026-02-19T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.108144 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.108190 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.108202 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.108242 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.108253 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:34Z","lastTransitionTime":"2026-02-19T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.214003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.214056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.214065 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.214079 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.214091 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:34Z","lastTransitionTime":"2026-02-19T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.316837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.316899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.316909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.316923 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.316933 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:34Z","lastTransitionTime":"2026-02-19T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.419709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.419764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.419778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.419800 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.419812 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:34Z","lastTransitionTime":"2026-02-19T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.526638 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.526681 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.526698 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.526716 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.526727 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:34Z","lastTransitionTime":"2026-02-19T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.548101 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:27:15.423440199 +0000 UTC Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.582752 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:34 crc kubenswrapper[4811]: E0219 10:09:34.582983 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.630033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.630392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.630479 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.630552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.630676 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:34Z","lastTransitionTime":"2026-02-19T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.734118 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.734164 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.734173 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.734191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.734202 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:34Z","lastTransitionTime":"2026-02-19T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.836186 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.836226 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.836241 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.836259 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.836271 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:34Z","lastTransitionTime":"2026-02-19T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.939502 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.939557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.939572 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.939640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:34 crc kubenswrapper[4811]: I0219 10:09:34.939658 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:34Z","lastTransitionTime":"2026-02-19T10:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.042872 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.042932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.042947 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.042970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.042988 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:35Z","lastTransitionTime":"2026-02-19T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.147018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.147154 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.147299 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.147327 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.147351 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:35Z","lastTransitionTime":"2026-02-19T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.250309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.250369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.250380 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.250403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.250417 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:35Z","lastTransitionTime":"2026-02-19T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.353639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.353705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.353719 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.353738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.353755 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:35Z","lastTransitionTime":"2026-02-19T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.457261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.457351 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.457364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.457385 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.457399 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:35Z","lastTransitionTime":"2026-02-19T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.548715 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:09:35.911604382 +0000 UTC Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.560687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.560745 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.560764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.560792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.560807 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:35Z","lastTransitionTime":"2026-02-19T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.582124 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.582218 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.582160 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:35 crc kubenswrapper[4811]: E0219 10:09:35.582306 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:35 crc kubenswrapper[4811]: E0219 10:09:35.582491 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:35 crc kubenswrapper[4811]: E0219 10:09:35.582581 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.664503 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.664559 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.664571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.664591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.664607 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:35Z","lastTransitionTime":"2026-02-19T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.767707 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.767778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.767794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.767827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.767845 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:35Z","lastTransitionTime":"2026-02-19T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.870753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.870792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.870805 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.870823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.870835 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:35Z","lastTransitionTime":"2026-02-19T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.973391 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.973487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.973508 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.973532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:35 crc kubenswrapper[4811]: I0219 10:09:35.973548 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:35Z","lastTransitionTime":"2026-02-19T10:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.076386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.076426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.076435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.076468 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.076479 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.179263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.179323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.179338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.179360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.179377 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.281827 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.281893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.281905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.281926 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.281939 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.385998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.386057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.386067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.386095 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.386108 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.489075 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.489137 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.489147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.489167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.489188 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.548872 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:26:43.157892762 +0000 UTC Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.582743 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:36 crc kubenswrapper[4811]: E0219 10:09:36.582909 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.591229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.591274 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.591283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.591298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.591307 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.694896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.694967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.694980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.694997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.695009 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.780146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.780200 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.780210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.780229 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.780240 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: E0219 10:09:36.795528 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:36Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.799248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.799301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.799311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.799333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.799345 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: E0219 10:09:36.814131 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:36Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.818744 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.818804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.818818 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.818844 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.818861 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: E0219 10:09:36.833913 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:36Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.837931 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.837968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.837977 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.838008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.838021 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: E0219 10:09:36.851960 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:36Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.856077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.856119 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.856130 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.856149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.856162 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: E0219 10:09:36.869833 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:36Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:36 crc kubenswrapper[4811]: E0219 10:09:36.870060 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.871929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.871964 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.871985 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.872005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.872018 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.975033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.975096 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.975112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.975135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:36 crc kubenswrapper[4811]: I0219 10:09:36.975149 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:36Z","lastTransitionTime":"2026-02-19T10:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.077131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.077170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.077182 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.077199 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.077211 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:37Z","lastTransitionTime":"2026-02-19T10:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.179712 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.179759 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.179768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.179782 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.179793 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:37Z","lastTransitionTime":"2026-02-19T10:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.282908 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.282951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.282963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.282984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.282997 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:37Z","lastTransitionTime":"2026-02-19T10:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.386040 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.386088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.386103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.386126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.386140 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:37Z","lastTransitionTime":"2026-02-19T10:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.489084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.489132 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.489145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.489166 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.489180 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:37Z","lastTransitionTime":"2026-02-19T10:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.549001 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:00:11.655606609 +0000 UTC Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.582245 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.582294 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.582338 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:37 crc kubenswrapper[4811]: E0219 10:09:37.583319 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:37 crc kubenswrapper[4811]: E0219 10:09:37.583551 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:37 crc kubenswrapper[4811]: E0219 10:09:37.583778 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.592093 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.592148 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.592163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.592189 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.592205 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:37Z","lastTransitionTime":"2026-02-19T10:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.695826 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.695876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.695893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.695920 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.695933 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:37Z","lastTransitionTime":"2026-02-19T10:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.799002 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.799049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.799063 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.799086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.799097 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:37Z","lastTransitionTime":"2026-02-19T10:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.901543 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.901584 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.901601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.901625 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:37 crc kubenswrapper[4811]: I0219 10:09:37.901641 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:37Z","lastTransitionTime":"2026-02-19T10:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.004615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.004665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.004676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.004695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.004705 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:38Z","lastTransitionTime":"2026-02-19T10:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.107792 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.107854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.107867 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.107889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.107903 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:38Z","lastTransitionTime":"2026-02-19T10:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.210221 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.210261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.210272 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.210288 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.210299 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:38Z","lastTransitionTime":"2026-02-19T10:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.313953 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.314018 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.314030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.314052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.314065 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:38Z","lastTransitionTime":"2026-02-19T10:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.417595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.417648 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.417665 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.417685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.417698 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:38Z","lastTransitionTime":"2026-02-19T10:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.520804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.520874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.520887 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.520911 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.520932 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:38Z","lastTransitionTime":"2026-02-19T10:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.550178 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:56:28.735964281 +0000 UTC Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.583054 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:38 crc kubenswrapper[4811]: E0219 10:09:38.583252 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.623996 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.624049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.624060 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.624085 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.624100 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:38Z","lastTransitionTime":"2026-02-19T10:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.727091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.727170 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.727191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.727214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.727231 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:38Z","lastTransitionTime":"2026-02-19T10:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.837126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.837184 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.837195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.837215 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.837227 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:38Z","lastTransitionTime":"2026-02-19T10:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.940652 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.940724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.940739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.940764 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:38 crc kubenswrapper[4811]: I0219 10:09:38.940781 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:38Z","lastTransitionTime":"2026-02-19T10:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.043891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.043939 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.043950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.043965 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.043977 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:39Z","lastTransitionTime":"2026-02-19T10:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.146364 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.146413 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.146428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.146473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.146486 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:39Z","lastTransitionTime":"2026-02-19T10:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.249649 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.249710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.249722 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.249746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.249763 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:39Z","lastTransitionTime":"2026-02-19T10:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.353093 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.353149 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.353160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.353180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.353197 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:39Z","lastTransitionTime":"2026-02-19T10:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.455938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.455993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.456008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.456028 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.456041 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:39Z","lastTransitionTime":"2026-02-19T10:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.550622 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:36:49.224099933 +0000 UTC Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.559680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.559727 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.559738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.559762 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.559776 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:39Z","lastTransitionTime":"2026-02-19T10:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.582305 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.582339 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:39 crc kubenswrapper[4811]: E0219 10:09:39.582558 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.582438 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:39 crc kubenswrapper[4811]: E0219 10:09:39.582641 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:39 crc kubenswrapper[4811]: E0219 10:09:39.582810 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.663323 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.663375 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.663384 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.663400 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.663410 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:39Z","lastTransitionTime":"2026-02-19T10:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.765706 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.765766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.765780 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.765794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.765802 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:39Z","lastTransitionTime":"2026-02-19T10:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.867960 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.868022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.868035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.868051 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.868061 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:39Z","lastTransitionTime":"2026-02-19T10:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.970631 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.970713 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.970729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.970745 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:39 crc kubenswrapper[4811]: I0219 10:09:39.970755 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:39Z","lastTransitionTime":"2026-02-19T10:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.073908 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.073957 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.073971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.073990 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.074002 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:40Z","lastTransitionTime":"2026-02-19T10:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.176582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.176635 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.176646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.176667 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.176676 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:40Z","lastTransitionTime":"2026-02-19T10:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.279271 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.279311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.279320 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.279338 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.279347 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:40Z","lastTransitionTime":"2026-02-19T10:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.382504 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.382558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.382574 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.382595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.382608 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:40Z","lastTransitionTime":"2026-02-19T10:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.485531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.485581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.485592 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.485611 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.485630 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:40Z","lastTransitionTime":"2026-02-19T10:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.551645 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:46:52.922433293 +0000 UTC Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.583143 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:40 crc kubenswrapper[4811]: E0219 10:09:40.583330 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.587678 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.587720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.587732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.587751 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.587762 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:40Z","lastTransitionTime":"2026-02-19T10:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.689742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.689788 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.689796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.689810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.689819 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:40Z","lastTransitionTime":"2026-02-19T10:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.792932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.793008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.793021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.793068 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.793080 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:40Z","lastTransitionTime":"2026-02-19T10:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.896094 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.896147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.896158 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.896179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.896191 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:40Z","lastTransitionTime":"2026-02-19T10:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.999147 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.999196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.999207 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.999225 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:40 crc kubenswrapper[4811]: I0219 10:09:40.999239 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:40Z","lastTransitionTime":"2026-02-19T10:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.102392 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.102538 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.102555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.102577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.102589 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:41Z","lastTransitionTime":"2026-02-19T10:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.205977 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.206034 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.206052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.206080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.206096 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:41Z","lastTransitionTime":"2026-02-19T10:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.308814 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.308856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.308866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.308880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.308897 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:41Z","lastTransitionTime":"2026-02-19T10:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.411493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.411545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.411555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.411579 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.411598 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:41Z","lastTransitionTime":"2026-02-19T10:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.514194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.514237 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.514249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.514268 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.514281 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:41Z","lastTransitionTime":"2026-02-19T10:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.552174 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:24:03.196839298 +0000 UTC Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.582886 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.582953 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.583013 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:41 crc kubenswrapper[4811]: E0219 10:09:41.583087 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:41 crc kubenswrapper[4811]: E0219 10:09:41.583261 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:41 crc kubenswrapper[4811]: E0219 10:09:41.583469 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.616601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.616645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.616660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.616677 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.616689 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:41Z","lastTransitionTime":"2026-02-19T10:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.719768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.719847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.719867 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.719894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.719912 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:41Z","lastTransitionTime":"2026-02-19T10:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.822218 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.822268 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.822282 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.822298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.822309 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:41Z","lastTransitionTime":"2026-02-19T10:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.925051 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.925098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.925109 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.925123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:41 crc kubenswrapper[4811]: I0219 10:09:41.925134 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:41Z","lastTransitionTime":"2026-02-19T10:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.028088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.028139 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.028153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.028174 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.028189 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:42Z","lastTransitionTime":"2026-02-19T10:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.130766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.130882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.130901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.130991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.131052 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:42Z","lastTransitionTime":"2026-02-19T10:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.233859 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.233904 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.233913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.233930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.233940 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:42Z","lastTransitionTime":"2026-02-19T10:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.337098 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.337173 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.337186 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.337214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.337233 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:42Z","lastTransitionTime":"2026-02-19T10:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.440611 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.440649 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.440657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.440672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.440682 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:42Z","lastTransitionTime":"2026-02-19T10:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.542777 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.542856 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.542869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.542892 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.542906 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:42Z","lastTransitionTime":"2026-02-19T10:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.553407 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:05:20.931217532 +0000 UTC Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.583022 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:42 crc kubenswrapper[4811]: E0219 10:09:42.583249 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.605913 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:20Z\\\",\\\"message\\\":\\\"3] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139644 6457 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139655 6457 ovn.go:134] Ensuring zone local for Pod 
openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0219 10:09:20.139663 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0219 10:09:20.139670 6457 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139631 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 10:09:20.139684 6457 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 10:09:20.139696 6457 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 398.819µs)\\\\nF0219 10:09:20.139608 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInfo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f2
1e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.621934 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.638105 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.645383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.645434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.645461 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.645486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.645497 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:42Z","lastTransitionTime":"2026-02-19T10:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.654843 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.667031 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b198
88cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.679502 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.697524 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.715297 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\"
,\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 
10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.729486 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9baac849-65d8-4d8e-90cc-1a82538dc848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.732172 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:42 crc kubenswrapper[4811]: E0219 10:09:42.732347 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:42 crc kubenswrapper[4811]: E0219 10:09:42.732417 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs podName:35d4fd8e-224d-468e-b6e2-a292a6e1949f nodeName:}" failed. No retries permitted until 2026-02-19 10:10:14.732399104 +0000 UTC m=+103.258863800 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs") pod "network-metrics-daemon-n5z6r" (UID: "35d4fd8e-224d-468e-b6e2-a292a6e1949f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.746379 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.749045 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.749100 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.749111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.749130 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.749143 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:42Z","lastTransitionTime":"2026-02-19T10:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.761604 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.780993 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.796266 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.812846 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.829779 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b6058e874ace3c98bd66981467c7d
16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.846532 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.851055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.851092 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.851104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.851123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.851135 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:42Z","lastTransitionTime":"2026-02-19T10:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.863858 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:
08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:42Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.953562 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.953613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.953666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.953687 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:42 crc kubenswrapper[4811]: I0219 10:09:42.953700 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:42Z","lastTransitionTime":"2026-02-19T10:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.056685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.056720 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.056729 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.056742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.056751 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:43Z","lastTransitionTime":"2026-02-19T10:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.160084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.160150 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.160173 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.160203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.160225 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:43Z","lastTransitionTime":"2026-02-19T10:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.262963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.263032 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.263047 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.263080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.263097 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:43Z","lastTransitionTime":"2026-02-19T10:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.366308 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.366354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.366363 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.366383 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.366394 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:43Z","lastTransitionTime":"2026-02-19T10:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.469719 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.469762 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.469775 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.469795 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.469807 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:43Z","lastTransitionTime":"2026-02-19T10:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.553928 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:02:14.633887954 +0000 UTC Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.572276 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.572341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.572355 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.572376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.572389 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:43Z","lastTransitionTime":"2026-02-19T10:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.582693 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.582759 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.582756 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:43 crc kubenswrapper[4811]: E0219 10:09:43.582862 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:43 crc kubenswrapper[4811]: E0219 10:09:43.582939 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:43 crc kubenswrapper[4811]: E0219 10:09:43.583036 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.675704 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.675761 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.675776 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.675799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.675814 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:43Z","lastTransitionTime":"2026-02-19T10:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.779080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.779142 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.779161 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.779185 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.779213 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:43Z","lastTransitionTime":"2026-02-19T10:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.881471 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.881505 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.881516 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.881532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.881542 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:43Z","lastTransitionTime":"2026-02-19T10:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.984787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.984834 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.984848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.984869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:43 crc kubenswrapper[4811]: I0219 10:09:43.984881 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:43Z","lastTransitionTime":"2026-02-19T10:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.088112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.088159 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.088171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.088191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.088204 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:44Z","lastTransitionTime":"2026-02-19T10:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.191777 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.191831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.191846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.191868 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.191885 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:44Z","lastTransitionTime":"2026-02-19T10:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.294693 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.294731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.294739 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.294752 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.294763 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:44Z","lastTransitionTime":"2026-02-19T10:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.398528 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.398588 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.398603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.398629 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.398646 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:44Z","lastTransitionTime":"2026-02-19T10:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.501624 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.501671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.501681 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.501696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.501704 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:44Z","lastTransitionTime":"2026-02-19T10:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.554976 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 18:14:41.932975122 +0000 UTC Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.582637 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:44 crc kubenswrapper[4811]: E0219 10:09:44.582852 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.604467 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.604507 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.604517 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.604534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.604547 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:44Z","lastTransitionTime":"2026-02-19T10:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.707395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.707476 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.707489 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.707555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.707575 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:44Z","lastTransitionTime":"2026-02-19T10:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.810984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.811163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.811210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.811249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.811271 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:44Z","lastTransitionTime":"2026-02-19T10:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.914281 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.914336 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.914350 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.914371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:44 crc kubenswrapper[4811]: I0219 10:09:44.914385 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:44Z","lastTransitionTime":"2026-02-19T10:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.017405 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.017496 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.017509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.017535 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.017549 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:45Z","lastTransitionTime":"2026-02-19T10:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.120516 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.120564 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.120581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.120598 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.120612 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:45Z","lastTransitionTime":"2026-02-19T10:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.224087 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.224153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.224165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.224188 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.224204 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:45Z","lastTransitionTime":"2026-02-19T10:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.233372 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxbrg_9766aa7e-a66b-4efc-a5de-2fe49ef00eb8/kube-multus/0.log" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.233433 4811 generic.go:334] "Generic (PLEG): container finished" podID="9766aa7e-a66b-4efc-a5de-2fe49ef00eb8" containerID="03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7" exitCode=1 Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.233486 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxbrg" event={"ID":"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8","Type":"ContainerDied","Data":"03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7"} Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.233944 4811 scope.go:117] "RemoveContainer" containerID="03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.251134 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.266113 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:44Z\\\",\\\"message\\\":\\\"2026-02-19T10:08:59+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670\\\\n2026-02-19T10:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670 to /host/opt/cni/bin/\\\\n2026-02-19T10:08:59Z [verbose] multus-daemon started\\\\n2026-02-19T10:08:59Z [verbose] Readiness Indicator file check\\\\n2026-02-19T10:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.279034 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b6058e874ace3c98bd66981467c7d
16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.292769 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.306563 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.326219 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:20Z\\\",\\\"message\\\":\\\"3] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139644 6457 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139655 6457 ovn.go:134] Ensuring zone local for Pod 
openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0219 10:09:20.139663 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0219 10:09:20.139670 6457 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139631 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 10:09:20.139684 6457 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 10:09:20.139696 6457 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 398.819µs)\\\\nF0219 10:09:20.139608 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInfo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f2
1e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.327470 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.327502 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.327511 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.327526 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.327536 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:45Z","lastTransitionTime":"2026-02-19T10:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.336866 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc 
kubenswrapper[4811]: I0219 10:09:45.348955 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.362979 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\"
,\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 
10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.380647 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9baac849-65d8-4d8e-90cc-1a82538dc848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.402219 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443d
d8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.418393 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.430486 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.430532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.430545 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.430566 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.430581 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:45Z","lastTransitionTime":"2026-02-19T10:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.435342 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.456748 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.473066 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.487128 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.499770 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:45Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.533924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 
10:09:45.533978 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.533990 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.534005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.534015 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:45Z","lastTransitionTime":"2026-02-19T10:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.555295 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:04:11.656861555 +0000 UTC Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.583169 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.583188 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.583378 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:45 crc kubenswrapper[4811]: E0219 10:09:45.583493 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:45 crc kubenswrapper[4811]: E0219 10:09:45.583637 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:45 crc kubenswrapper[4811]: E0219 10:09:45.583764 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.636789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.636832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.636843 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.636863 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.636878 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:45Z","lastTransitionTime":"2026-02-19T10:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.739850 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.739896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.739907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.739925 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.739943 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:45Z","lastTransitionTime":"2026-02-19T10:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.842774 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.842830 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.842846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.842866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.842882 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:45Z","lastTransitionTime":"2026-02-19T10:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.946522 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.946571 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.946583 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.946604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:45 crc kubenswrapper[4811]: I0219 10:09:45.946619 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:45Z","lastTransitionTime":"2026-02-19T10:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.050049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.050091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.050107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.050126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.050139 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:46Z","lastTransitionTime":"2026-02-19T10:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.152914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.152953 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.152963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.152981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.152993 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:46Z","lastTransitionTime":"2026-02-19T10:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.240136 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxbrg_9766aa7e-a66b-4efc-a5de-2fe49ef00eb8/kube-multus/0.log" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.240199 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxbrg" event={"ID":"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8","Type":"ContainerStarted","Data":"5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd"} Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.255226 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.255344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.255401 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.255416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.255462 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.255479 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:46Z","lastTransitionTime":"2026-02-19T10:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.269866 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.282516 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.294636 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.306831 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.319300 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.336428 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:44Z\\\",\\\"message\\\":\\\"2026-02-19T10:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670\\\\n2026-02-19T10:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670 to /host/opt/cni/bin/\\\\n2026-02-19T10:08:59Z [verbose] multus-daemon started\\\\n2026-02-19T10:08:59Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T10:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.347721 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b6058e874ace3c98bd66981467c7d
16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.358265 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.358319 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.358330 4811 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.358378 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.358393 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:46Z","lastTransitionTime":"2026-02-19T10:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.359416 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.374970 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.435512 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:20Z\\\",\\\"message\\\":\\\"3] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139644 6457 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139655 6457 ovn.go:134] Ensuring zone local for Pod 
openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0219 10:09:20.139663 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0219 10:09:20.139670 6457 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139631 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 10:09:20.139684 6457 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 10:09:20.139696 6457 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 398.819µs)\\\\nF0219 10:09:20.139608 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInfo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f2
1e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.454778 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.466958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.467015 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.467029 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.467052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.467253 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:46Z","lastTransitionTime":"2026-02-19T10:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.476712 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353f
dd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.501761 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\"
,\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 
10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.515509 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9baac849-65d8-4d8e-90cc-1a82538dc848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.529637 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443d
d8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.543301 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:46Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.555702 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:33:01.074399373 +0000 UTC Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.569836 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.569870 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.569879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.569898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.569914 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:46Z","lastTransitionTime":"2026-02-19T10:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.582686 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:46 crc kubenswrapper[4811]: E0219 10:09:46.583133 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.583372 4811 scope.go:117] "RemoveContainer" containerID="191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.671898 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.671930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.671941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.671955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.671965 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:46Z","lastTransitionTime":"2026-02-19T10:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.774005 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.774031 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.774042 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.774056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.774066 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:46Z","lastTransitionTime":"2026-02-19T10:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.876842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.876882 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.876893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.876909 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.876920 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:46Z","lastTransitionTime":"2026-02-19T10:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.979604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.979672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.979688 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.979712 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:46 crc kubenswrapper[4811]: I0219 10:09:46.979727 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:46Z","lastTransitionTime":"2026-02-19T10:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.082025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.082073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.082082 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.082097 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.082107 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.088493 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.088553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.088569 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.088587 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.088599 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: E0219 10:09:47.103559 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.107541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.107578 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.107590 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.107606 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.107617 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: E0219 10:09:47.121153 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.125495 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.125530 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.125541 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.125556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.125566 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: E0219 10:09:47.138424 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.141940 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.141973 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.141984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.142000 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.142010 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: E0219 10:09:47.154741 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.158419 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.158470 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.158482 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.158506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.158517 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: E0219 10:09:47.173137 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: E0219 10:09:47.173278 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.184675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.184724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.184736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.184753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.184765 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.244597 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/2.log" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.247042 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910"} Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.247403 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.271774 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.286911 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.286950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.286961 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.286977 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.286987 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.291133 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.307334 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.322554 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.338416 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.355607 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.367738 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:44Z\\\",\\\"message\\\":\\\"2026-02-19T10:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670\\\\n2026-02-19T10:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670 to /host/opt/cni/bin/\\\\n2026-02-19T10:08:59Z [verbose] multus-daemon started\\\\n2026-02-19T10:08:59Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T10:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.380917 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b6058e874ace3c98bd66981467c7d
16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.388954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.388997 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.389007 4811 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.389028 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.389073 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.397827 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.415955 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.439843 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:20Z\\\",\\\"message\\\":\\\"3] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139644 6457 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139655 6457 ovn.go:134] Ensuring zone local for Pod 
openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0219 10:09:20.139663 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0219 10:09:20.139670 6457 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139631 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 10:09:20.139684 6457 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 10:09:20.139696 6457 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 398.819µs)\\\\nF0219 10:09:20.139608 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for 
anpInfo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.457194 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc 
kubenswrapper[4811]: I0219 10:09:47.471221 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.488724 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\"
,\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 
10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.492163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.492211 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.492224 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.492244 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.492256 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.502821 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9baac849-65d8-4d8e-90cc-1a82538dc848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.517695 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.531991 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:47Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.556606 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:43:34.926845449 +0000 UTC Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.582987 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:47 crc kubenswrapper[4811]: E0219 10:09:47.583099 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.583157 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:47 crc kubenswrapper[4811]: E0219 10:09:47.583196 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.583228 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:47 crc kubenswrapper[4811]: E0219 10:09:47.583266 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.594949 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.594971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.594981 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.594994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.595005 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.697553 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.697611 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.697631 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.697657 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.697671 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.800802 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.800842 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.800853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.800871 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.800902 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.903547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.903604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.903616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.903640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:47 crc kubenswrapper[4811]: I0219 10:09:47.903656 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:47Z","lastTransitionTime":"2026-02-19T10:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.006940 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.007007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.007025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.007049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.007066 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:48Z","lastTransitionTime":"2026-02-19T10:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.110732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.110805 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.110823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.110844 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.110857 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:48Z","lastTransitionTime":"2026-02-19T10:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.246810 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.246850 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.246859 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.246877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.246890 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:48Z","lastTransitionTime":"2026-02-19T10:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.251166 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/3.log" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.251881 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/2.log" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.254152 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" exitCode=1 Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.254216 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910"} Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.254274 4811 scope.go:117] "RemoveContainer" containerID="191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.255287 4811 scope.go:117] "RemoveContainer" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:09:48 crc kubenswrapper[4811]: E0219 10:09:48.255527 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.274440 4811 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.289412 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.309844 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://191613fba6a5b20856a4112533786ea84b04713389b23a1a6bedac4be281c5ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:20Z\\\",\\\"message\\\":\\\"3] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139644 6457 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139655 6457 ovn.go:134] Ensuring zone local for Pod 
openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0219 10:09:20.139663 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0219 10:09:20.139670 6457 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 10:09:20.139631 6457 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0219 10:09:20.139684 6457 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0219 10:09:20.139696 6457 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 398.819µs)\\\\nF0219 10:09:20.139608 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInfo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"r 2 for removal\\\\nI0219 10:09:47.457252 6865 factory.go:656] Stopping watch factory\\\\nI0219 10:09:47.457280 6865 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:47.457367 6865 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 10:09:47.458036 6865 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 
10:09:47.458192 6865 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 10:09:47.458517 6865 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 10:09:47.479965 6865 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 10:09:47.480014 6865 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 10:09:47.480134 6865 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:47.480173 6865 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:47.480297 6865 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\
"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.321785 4811 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc 
kubenswrapper[4811]: I0219 10:09:48.334944 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.349480 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.349513 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.349521 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.349534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.349542 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:48Z","lastTransitionTime":"2026-02-19T10:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.351574 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6
a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.364829 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9baac849-65d8-4d8e-90cc-1a82538dc848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
9T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.377328 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.390262 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.404353 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.417664 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.431480 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.445575 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.451488 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.451536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.451547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.451567 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.451577 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:48Z","lastTransitionTime":"2026-02-19T10:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.458186 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.471244 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.483825 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:44Z\\\",\\\"message\\\":\\\"2026-02-19T10:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670\\\\n2026-02-19T10:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670 to /host/opt/cni/bin/\\\\n2026-02-19T10:08:59Z [verbose] multus-daemon started\\\\n2026-02-19T10:08:59Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T10:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.494819 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b6058e874ace3c98bd66981467c7d
16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:48Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.553865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.553924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.553941 4811 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.553963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.553982 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:48Z","lastTransitionTime":"2026-02-19T10:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.557047 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:58:29.710618971 +0000 UTC Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.582483 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:48 crc kubenswrapper[4811]: E0219 10:09:48.582680 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.656218 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.656267 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.656279 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.656296 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.656307 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:48Z","lastTransitionTime":"2026-02-19T10:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.759869 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.759953 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.759967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.759992 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.760007 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:48Z","lastTransitionTime":"2026-02-19T10:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.863854 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.863896 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.863907 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.863923 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.863933 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:48Z","lastTransitionTime":"2026-02-19T10:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.966638 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.966696 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.966718 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.966746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:48 crc kubenswrapper[4811]: I0219 10:09:48.966769 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:48Z","lastTransitionTime":"2026-02-19T10:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.069731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.069788 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.069823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.069852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.069873 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:49Z","lastTransitionTime":"2026-02-19T10:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.172133 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.172193 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.172218 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.172261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.172277 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:49Z","lastTransitionTime":"2026-02-19T10:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.260087 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/3.log" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.266103 4811 scope.go:117] "RemoveContainer" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:09:49 crc kubenswrapper[4811]: E0219 10:09:49.266305 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.275090 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.275139 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.275155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.275179 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.275200 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:49Z","lastTransitionTime":"2026-02-19T10:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.281381 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-pro
xy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.296041 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.310742 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.324241 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.337098 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.349963 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.367825 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:44Z\\\",\\\"message\\\":\\\"2026-02-19T10:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670\\\\n2026-02-19T10:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670 to /host/opt/cni/bin/\\\\n2026-02-19T10:08:59Z [verbose] multus-daemon started\\\\n2026-02-19T10:08:59Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T10:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.377160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.377201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.377213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.377228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.377238 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:49Z","lastTransitionTime":"2026-02-19T10:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.382918 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b6058e874ace3c98bd66981467c7d16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.398300 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc 
kubenswrapper[4811]: I0219 10:09:49.414691 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.437648 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.458072 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"r 2 for removal\\\\nI0219 10:09:47.457252 6865 factory.go:656] Stopping watch factory\\\\nI0219 10:09:47.457280 6865 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:47.457367 6865 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 10:09:47.458036 6865 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 10:09:47.458192 6865 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 10:09:47.458517 6865 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 10:09:47.479965 6865 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 10:09:47.480014 6865 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 10:09:47.480134 6865 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:47.480173 6865 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:47.480297 6865 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f2
1e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.468372 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.479711 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.479753 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.479762 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.479797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.479809 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:49Z","lastTransitionTime":"2026-02-19T10:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.482763 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353f
dd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.496774 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\"
,\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 
10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.507761 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9baac849-65d8-4d8e-90cc-1a82538dc848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.518462 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443d
d8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:49Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.558083 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:37:35.847896484 +0000 UTC Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.582157 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:49 crc kubenswrapper[4811]: E0219 10:09:49.582321 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.582398 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.582509 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.582581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.582617 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.582634 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.582661 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.582678 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:49Z","lastTransitionTime":"2026-02-19T10:09:49Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:49 crc kubenswrapper[4811]: E0219 10:09:49.582621 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:49 crc kubenswrapper[4811]: E0219 10:09:49.582785 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.685309 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.685386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.685404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.685431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.685474 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:49Z","lastTransitionTime":"2026-02-19T10:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.787589 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.787638 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.787652 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.787669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.787680 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:49Z","lastTransitionTime":"2026-02-19T10:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.890865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.890929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.890944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.890971 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.890988 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:49Z","lastTransitionTime":"2026-02-19T10:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.994173 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.994222 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.994233 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.994247 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:49 crc kubenswrapper[4811]: I0219 10:09:49.994255 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:49Z","lastTransitionTime":"2026-02-19T10:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.097532 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.097582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.097595 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.097614 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.097625 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:50Z","lastTransitionTime":"2026-02-19T10:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.201168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.201267 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.201287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.201317 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.201365 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:50Z","lastTransitionTime":"2026-02-19T10:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.303772 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.303837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.303853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.303876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.303893 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:50Z","lastTransitionTime":"2026-02-19T10:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.405861 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.405924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.405941 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.405967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.405987 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:50Z","lastTransitionTime":"2026-02-19T10:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.508283 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.508346 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.508359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.508377 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.508398 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:50Z","lastTransitionTime":"2026-02-19T10:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.558397 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:11:21.402085688 +0000 UTC Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.582862 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:50 crc kubenswrapper[4811]: E0219 10:09:50.583044 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.616822 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.616881 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.616893 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.616917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.616929 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:50Z","lastTransitionTime":"2026-02-19T10:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.719850 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.719878 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.719886 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.719903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.719915 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:50Z","lastTransitionTime":"2026-02-19T10:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.913847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.913933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.913994 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.914030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:50 crc kubenswrapper[4811]: I0219 10:09:50.914054 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:50Z","lastTransitionTime":"2026-02-19T10:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.016141 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.016199 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.016216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.016248 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.016284 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:51Z","lastTransitionTime":"2026-02-19T10:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.119077 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.119134 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.119145 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.119163 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.119545 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:51Z","lastTransitionTime":"2026-02-19T10:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.222922 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.222976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.222991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.223011 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.223026 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:51Z","lastTransitionTime":"2026-02-19T10:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.328914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.328967 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.328980 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.328999 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.329011 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:51Z","lastTransitionTime":"2026-02-19T10:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.430817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.430853 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.430862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.430874 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.430884 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:51Z","lastTransitionTime":"2026-02-19T10:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.533750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.533786 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.533794 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.533808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.533817 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:51Z","lastTransitionTime":"2026-02-19T10:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.559258 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:13:19.812469764 +0000 UTC Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.582909 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.582944 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.582969 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:51 crc kubenswrapper[4811]: E0219 10:09:51.583069 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:51 crc kubenswrapper[4811]: E0219 10:09:51.583114 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:51 crc kubenswrapper[4811]: E0219 10:09:51.583166 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.636408 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.636487 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.636509 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.636531 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.636545 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:51Z","lastTransitionTime":"2026-02-19T10:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.739052 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.739160 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.739178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.739231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.739249 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:51Z","lastTransitionTime":"2026-02-19T10:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.843049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.843112 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.843151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.843181 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.843206 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:51Z","lastTransitionTime":"2026-02-19T10:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.946196 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.946256 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.946269 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.946294 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:51 crc kubenswrapper[4811]: I0219 10:09:51.946310 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:51Z","lastTransitionTime":"2026-02-19T10:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.050116 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.050187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.050205 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.050234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.050263 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:52Z","lastTransitionTime":"2026-02-19T10:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.152950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.153008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.153025 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.153048 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.153063 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:52Z","lastTransitionTime":"2026-02-19T10:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.255804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.255954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.255998 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.256021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.256033 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:52Z","lastTransitionTime":"2026-02-19T10:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.358307 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.358343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.358356 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.358372 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.358383 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:52Z","lastTransitionTime":"2026-02-19T10:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.461517 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.461570 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.461588 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.461612 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.461629 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:52Z","lastTransitionTime":"2026-02-19T10:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.559884 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:56:33.944513325 +0000 UTC Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.564102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.564155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.564165 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.564191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.564210 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:52Z","lastTransitionTime":"2026-02-19T10:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.583622 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:52 crc kubenswrapper[4811]: E0219 10:09:52.583925 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.600012 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.616990 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:44Z\\\",\\\"message\\\":\\\"2026-02-19T10:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670\\\\n2026-02-19T10:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670 to /host/opt/cni/bin/\\\\n2026-02-19T10:08:59Z [verbose] multus-daemon started\\\\n2026-02-19T10:08:59Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T10:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.631868 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b6058e874ace3c98bd66981467c7d
16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.646396 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc 
kubenswrapper[4811]: I0219 10:09:52.664792 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.668605 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.668671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.668689 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.668710 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.668736 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:52Z","lastTransitionTime":"2026-02-19T10:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.682930 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.709250 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"r 2 for removal\\\\nI0219 10:09:47.457252 6865 factory.go:656] Stopping watch factory\\\\nI0219 10:09:47.457280 6865 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:47.457367 6865 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 10:09:47.458036 6865 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 10:09:47.458192 6865 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 10:09:47.458517 6865 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 10:09:47.479965 6865 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 10:09:47.480014 6865 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 10:09:47.480134 6865 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:47.480173 6865 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:47.480297 6865 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f2
1e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.720569 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.733335 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.752707 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\"
,\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 
10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.768923 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9baac849-65d8-4d8e-90cc-1a82538dc848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.770592 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.770621 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.770633 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.770651 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.770663 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:52Z","lastTransitionTime":"2026-02-19T10:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.784809 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.797875 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.813892 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.829762 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.842971 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.854557 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:09:52Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.873530 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.873591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.873601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.873636 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.873648 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:52Z","lastTransitionTime":"2026-02-19T10:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.976514 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.976580 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.976603 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.976635 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:52 crc kubenswrapper[4811]: I0219 10:09:52.976657 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:52Z","lastTransitionTime":"2026-02-19T10:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.078666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.078724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.078740 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.078757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.078768 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:53Z","lastTransitionTime":"2026-02-19T10:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.181213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.181265 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.181278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.181302 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.181316 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:53Z","lastTransitionTime":"2026-02-19T10:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.283763 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.283838 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.283855 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.283880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.283897 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:53Z","lastTransitionTime":"2026-02-19T10:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.386777 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.387234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.387301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.387369 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.387437 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:53Z","lastTransitionTime":"2026-02-19T10:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.491180 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.491224 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.491237 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.491253 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.491263 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:53Z","lastTransitionTime":"2026-02-19T10:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.561117 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:16:37.690039538 +0000 UTC Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.582509 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.582672 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:53 crc kubenswrapper[4811]: E0219 10:09:53.582815 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:53 crc kubenswrapper[4811]: E0219 10:09:53.582927 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.583016 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:53 crc kubenswrapper[4811]: E0219 10:09:53.583212 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.593293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.593334 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.593343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.593357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.593366 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:53Z","lastTransitionTime":"2026-02-19T10:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.695938 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.696055 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.696081 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.696108 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.696131 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:53Z","lastTransitionTime":"2026-02-19T10:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.798512 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.798550 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.798558 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.798573 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.798581 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:53Z","lastTransitionTime":"2026-02-19T10:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.901955 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.902292 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.902376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.902470 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:53 crc kubenswrapper[4811]: I0219 10:09:53.902550 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:53Z","lastTransitionTime":"2026-02-19T10:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.005099 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.005155 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.005173 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.005197 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.005214 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:54Z","lastTransitionTime":"2026-02-19T10:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.107349 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.107391 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.107403 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.107419 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.107430 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:54Z","lastTransitionTime":"2026-02-19T10:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.210434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.210501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.210542 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.210557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.210567 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:54Z","lastTransitionTime":"2026-02-19T10:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.314183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.314261 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.314274 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.314298 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.314314 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:54Z","lastTransitionTime":"2026-02-19T10:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.417750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.417797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.417811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.417833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.417847 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:54Z","lastTransitionTime":"2026-02-19T10:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.520974 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.521035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.521051 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.521072 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.521086 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:54Z","lastTransitionTime":"2026-02-19T10:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.561731 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:13:53.402822091 +0000 UTC Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.583254 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:54 crc kubenswrapper[4811]: E0219 10:09:54.583506 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.623999 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.624056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.624067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.624088 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.624102 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:54Z","lastTransitionTime":"2026-02-19T10:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.726865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.726902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.726914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.726932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.726942 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:54Z","lastTransitionTime":"2026-02-19T10:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.829086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.829146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.829156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.829210 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.829222 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:54Z","lastTransitionTime":"2026-02-19T10:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.932581 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.932685 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.932709 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.932746 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:54 crc kubenswrapper[4811]: I0219 10:09:54.932768 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:54Z","lastTransitionTime":"2026-02-19T10:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.035207 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.035270 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.035287 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.035312 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.035329 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:55Z","lastTransitionTime":"2026-02-19T10:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.138107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.138187 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.138203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.138222 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.138233 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:55Z","lastTransitionTime":"2026-02-19T10:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.240585 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.240641 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.240652 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.240671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.240684 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:55Z","lastTransitionTime":"2026-02-19T10:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.343463 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.343506 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.343519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.343536 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.343548 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:55Z","lastTransitionTime":"2026-02-19T10:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.446750 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.446797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.446807 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.446825 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.446837 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:55Z","lastTransitionTime":"2026-02-19T10:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.549463 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.549524 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.549540 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.549556 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.549568 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:55Z","lastTransitionTime":"2026-02-19T10:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.562092 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:43:54.646545571 +0000 UTC Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.582773 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:55 crc kubenswrapper[4811]: E0219 10:09:55.582923 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.582775 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.583013 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:55 crc kubenswrapper[4811]: E0219 10:09:55.583295 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:55 crc kubenswrapper[4811]: E0219 10:09:55.583518 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.652206 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.652258 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.652273 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.652294 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.652310 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:55Z","lastTransitionTime":"2026-02-19T10:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.754883 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.754919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.754929 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.754943 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.754953 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:55Z","lastTransitionTime":"2026-02-19T10:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.857003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.857056 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.857067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.857084 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.857094 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:55Z","lastTransitionTime":"2026-02-19T10:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.959748 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.959840 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.959862 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.959890 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:55 crc kubenswrapper[4811]: I0219 10:09:55.959914 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:55Z","lastTransitionTime":"2026-02-19T10:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.063032 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.063080 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.063091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.063110 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.063121 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:56Z","lastTransitionTime":"2026-02-19T10:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.166019 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.166073 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.166086 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.166107 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.166118 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:56Z","lastTransitionTime":"2026-02-19T10:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.269103 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.269146 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.269156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.269171 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.269181 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:56Z","lastTransitionTime":"2026-02-19T10:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.371178 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.371226 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.371236 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.371252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.371264 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:56Z","lastTransitionTime":"2026-02-19T10:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.474214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.474279 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.474289 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.474310 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.474323 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:56Z","lastTransitionTime":"2026-02-19T10:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.562438 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:50:06.866734395 +0000 UTC Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.576547 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.576604 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.576613 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.576638 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.576650 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:56Z","lastTransitionTime":"2026-02-19T10:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.582928 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:56 crc kubenswrapper[4811]: E0219 10:09:56.583140 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.679711 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.680215 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.680227 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.680246 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.680261 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:56Z","lastTransitionTime":"2026-02-19T10:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.782738 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.782804 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.782824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.782849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.782868 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:56Z","lastTransitionTime":"2026-02-19T10:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.884962 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.885007 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.885019 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.885036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.885052 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:56Z","lastTransitionTime":"2026-02-19T10:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.987752 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.987796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.987829 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.987879 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:56 crc kubenswrapper[4811]: I0219 10:09:56.987891 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:56Z","lastTransitionTime":"2026-02-19T10:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.091239 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.091285 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.091295 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.091313 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.091321 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.194266 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.194321 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.194335 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.194357 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.194378 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.263557 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.263616 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.263628 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.263646 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.263657 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: E0219 10:09:57.277765 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.282091 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.282131 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.282141 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.282157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.282169 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: E0219 10:09:57.304425 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.310252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.310326 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.310342 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.310363 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.310402 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: E0219 10:09:57.327883 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.333884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.333953 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.333968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.333984 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.334014 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: E0219 10:09:57.349847 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.354389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.354426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.354436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.354470 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.354482 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: E0219 10:09:57.369989 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aa043662-c579-48da-bd33-d995796f09b6\\\",\\\"systemUUID\\\":\\\"0ae76a21-a2ee-4264-8d70-e6dc0f503661\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:09:57Z is after 2025-08-24T17:21:41Z" Feb 19 10:09:57 crc kubenswrapper[4811]: E0219 10:09:57.370130 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.372186 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.372217 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.372226 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.372245 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.372259 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.475523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.475607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.475619 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.475640 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.475653 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.562856 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:11:18.058066262 +0000 UTC Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.579037 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.579089 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.579102 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.579123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.579143 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.582385 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.582385 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.582411 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:57 crc kubenswrapper[4811]: E0219 10:09:57.582615 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:57 crc kubenswrapper[4811]: E0219 10:09:57.582675 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:57 crc kubenswrapper[4811]: E0219 10:09:57.582752 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.682124 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.682182 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.682193 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.682214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.682228 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.784203 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.784237 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.784250 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.784267 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.784279 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.888123 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.888190 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.888207 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.888228 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.888244 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.991958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.992075 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.992097 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.992122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:57 crc kubenswrapper[4811]: I0219 10:09:57.992138 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:57Z","lastTransitionTime":"2026-02-19T10:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.094868 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.094935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.094945 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.094968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.094981 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:58Z","lastTransitionTime":"2026-02-19T10:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.197207 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.197243 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.197252 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.197265 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.197276 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:58Z","lastTransitionTime":"2026-02-19T10:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.299913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.299991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.300003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.300021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.300067 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:58Z","lastTransitionTime":"2026-02-19T10:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.402855 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.402894 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.402902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.402918 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.402928 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:58Z","lastTransitionTime":"2026-02-19T10:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.506269 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.506320 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.506333 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.506354 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.506371 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:58Z","lastTransitionTime":"2026-02-19T10:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.563664 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 03:29:01.490631141 +0000 UTC Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.582395 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:09:58 crc kubenswrapper[4811]: E0219 10:09:58.582549 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.608902 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.608964 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.608976 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.608995 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.609119 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:58Z","lastTransitionTime":"2026-02-19T10:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.713140 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.713209 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.713231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.713259 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.713283 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:58Z","lastTransitionTime":"2026-02-19T10:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.815974 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.816036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.816051 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.816075 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.816091 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:58Z","lastTransitionTime":"2026-02-19T10:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.919523 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.919591 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.919607 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.919631 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:58 crc kubenswrapper[4811]: I0219 10:09:58.919650 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:58Z","lastTransitionTime":"2026-02-19T10:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.024469 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.024534 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.024550 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.024577 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.024593 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:59Z","lastTransitionTime":"2026-02-19T10:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.128359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.128409 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.128422 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.128441 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.128484 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:59Z","lastTransitionTime":"2026-02-19T10:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.230964 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.231014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.231029 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.231046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.231056 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:59Z","lastTransitionTime":"2026-02-19T10:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.333995 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.334047 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.334059 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.334078 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.334091 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:59Z","lastTransitionTime":"2026-02-19T10:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.436743 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.436787 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.436797 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.436833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.436846 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:59Z","lastTransitionTime":"2026-02-19T10:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.539386 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.539481 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.539496 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.539519 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.539533 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:59Z","lastTransitionTime":"2026-02-19T10:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.563803 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:47:00.698438612 +0000 UTC Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.583207 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.583244 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.583310 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.583891 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.584034 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.584236 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.599724 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.643021 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.643169 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.643194 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.643222 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.643243 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:59Z","lastTransitionTime":"2026-02-19T10:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.721761 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.722005 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.722097 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.722047829 +0000 UTC m=+152.248512525 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.722210 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.722222 4811 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.722395 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.722361176 +0000 UTC m=+152.248825902 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.722441 4811 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.722592 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.722565291 +0000 UTC m=+152.249030197 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.747832 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.748157 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.748234 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.748361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.748516 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:59Z","lastTransitionTime":"2026-02-19T10:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.823093 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.823167 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.823284 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.823299 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.823310 4811 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.823355 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.823341264 +0000 UTC m=+152.349805940 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.823541 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.823559 4811 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.823569 4811 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:59 crc kubenswrapper[4811]: E0219 10:09:59.823592 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.82358515 +0000 UTC m=+152.350049836 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.851718 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.851768 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.851779 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.851799 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.851810 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:59Z","lastTransitionTime":"2026-02-19T10:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.954601 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.954660 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.954671 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.954691 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:09:59 crc kubenswrapper[4811]: I0219 10:09:59.954703 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:09:59Z","lastTransitionTime":"2026-02-19T10:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.057968 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.058022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.058033 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.058054 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.058066 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:00Z","lastTransitionTime":"2026-02-19T10:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.160866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.160921 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.160933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.160954 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.160968 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:00Z","lastTransitionTime":"2026-02-19T10:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.263808 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.263873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.263884 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.263901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.263912 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:00Z","lastTransitionTime":"2026-02-19T10:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.366070 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.366153 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.366173 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.366199 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.366216 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:00Z","lastTransitionTime":"2026-02-19T10:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.469105 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.469156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.469172 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.469191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.469206 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:00Z","lastTransitionTime":"2026-02-19T10:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.564535 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:02:23.619686966 +0000 UTC Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.571963 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.572008 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.572020 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.572041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.572053 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:00Z","lastTransitionTime":"2026-02-19T10:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.582576 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:00 crc kubenswrapper[4811]: E0219 10:10:00.582715 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.673988 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.674022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.674030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.674043 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.674052 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:00Z","lastTransitionTime":"2026-02-19T10:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.776371 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.776423 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.776434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.776473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.776488 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:00Z","lastTransitionTime":"2026-02-19T10:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.879214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.879263 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.879275 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.879293 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.879306 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:00Z","lastTransitionTime":"2026-02-19T10:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.982645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.982715 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.982732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.982754 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:00 crc kubenswrapper[4811]: I0219 10:10:00.982770 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:00Z","lastTransitionTime":"2026-02-19T10:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.086057 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.086249 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.086304 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.086343 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.086367 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:01Z","lastTransitionTime":"2026-02-19T10:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.188841 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.188935 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.188958 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.188986 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.189193 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:01Z","lastTransitionTime":"2026-02-19T10:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.292393 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.292434 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.292443 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.292472 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.292484 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:01Z","lastTransitionTime":"2026-02-19T10:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.395274 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.395324 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.395341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.395360 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.395373 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:01Z","lastTransitionTime":"2026-02-19T10:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.497991 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.498036 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.498046 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.498061 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.498072 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:01Z","lastTransitionTime":"2026-02-19T10:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.565243 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:06:51.412971649 +0000 UTC Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.582668 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.582739 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.582687 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:01 crc kubenswrapper[4811]: E0219 10:10:01.582847 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:01 crc kubenswrapper[4811]: E0219 10:10:01.582970 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:01 crc kubenswrapper[4811]: E0219 10:10:01.583156 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.600694 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.600757 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.600778 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.600801 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.600817 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:01Z","lastTransitionTime":"2026-02-19T10:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.703376 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.703473 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.703494 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.703520 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.703533 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:01Z","lastTransitionTime":"2026-02-19T10:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.806609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.806669 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.806690 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.806718 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.806739 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:01Z","lastTransitionTime":"2026-02-19T10:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.910837 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.910924 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.910944 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.910970 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:01 crc kubenswrapper[4811]: I0219 10:10:01.910990 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:01Z","lastTransitionTime":"2026-02-19T10:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.014260 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.014341 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.014362 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.014388 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.014406 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:02Z","lastTransitionTime":"2026-02-19T10:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.117705 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.117762 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.117775 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.117796 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.117809 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:02Z","lastTransitionTime":"2026-02-19T10:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.220731 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.220811 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.220824 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.220847 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.220863 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:02Z","lastTransitionTime":"2026-02-19T10:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.323782 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.323835 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.323846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.323866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.323877 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:02Z","lastTransitionTime":"2026-02-19T10:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.427344 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.427435 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.427465 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.427491 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.427508 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:02Z","lastTransitionTime":"2026-02-19T10:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.530401 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.530537 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.530563 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.530594 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.530617 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:02Z","lastTransitionTime":"2026-02-19T10:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.565435 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:43:18.976807712 +0000 UTC Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.583293 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:02 crc kubenswrapper[4811]: E0219 10:10:02.584086 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.584715 4811 scope.go:117] "RemoveContainer" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:10:02 crc kubenswrapper[4811]: E0219 10:10:02.584952 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.598750 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bpzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"221c25e2-b834-4fe6-83e4-714328c60b4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3788443dd8b315ed6c0e0f94f9ec1524525bde8a1bfdab
4096a4af6b17155977\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj9g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bpzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.612745 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nw57r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5177375f-9a2c-4430-8c2e-307ddf187cb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd28eb1f2ce842c3a00d30faed2acaaf4d4406342e9dc7f88595be430d0423ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5phq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nw57r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.631609 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e127ab42-2bc8-431a-8a19-bd4702c891a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86259814fea82a5efe6db77aa54366320dc4a9490059ca829df4d72d6b425937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094e33e1c96d3cb4b4aa5a9f8021d449dbc2bd0d653e8811b4068a9cf81ae88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cb9bb07e861adc861755c1d524fc583ab482f536c8983ff77595ebac64b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.634686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.634732 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.634745 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.634767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.634791 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:02Z","lastTransitionTime":"2026-02-19T10:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.652190 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62cc56d1-ce3a-4458-a5b5-de646468654f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\"
,\\\"message\\\":\\\"ion-apiserver-authentication::requestheader-client-ca-file\\\\nI0219 10:08:54.897941 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0219 10:08:54.897951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0219 10:08:54.898073 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771495718\\\\\\\\\\\\\\\" (2026-02-19 10:08:37 +0000 UTC to 2026-03-21 10:08:38 +0000 UTC (now=2026-02-19 10:08:54.898015179 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898163 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2737294845/tls.crt::/tmp/serving-cert-2737294845/tls.key\\\\\\\"\\\\nI0219 10:08:54.898747 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771495728\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771495728\\\\\\\\\\\\\\\" (2026-02-19 09:08:48 +0000 UTC to 2027-02-19 09:08:48 +0000 UTC (now=2026-02-19 10:08:54.898701245 +0000 UTC))\\\\\\\"\\\\nI0219 10:08:54.898777 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0219 10:08:54.898804 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0219 10:08:54.898823 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0219 10:08:54.900253 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 
10:08:54.900328 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0219 10:08:54.900654 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0219 10:08:54.907320 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.670441 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9baac849-65d8-4d8e-90cc-1a82538dc848\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820be18f53ce1e3de635aae2c235a7522e55358347ad227db441688e38eff7a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f98f3b7146dd32290f30820bdf650e17aa3147f6fa4aedce1a59301c70bc7070\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b78db08599819faea15066929a2406d58aeb5abf2ec530238ac08f02cb937111\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://da14e1ea932ad641416cf7d2d3702ac5fecbee71b67ed85ad5dc9e17d6ba067b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.688304 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e102b35eadbc15e015f695e0f25f51576d5a023de5326f37421223e711ac407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.705164 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2e42c80-bece-4066-abad-05bc661d33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ac23aaef2d47d83268a51b9efb514cc10a9fdb47d2304f5cabf31c2ae46abfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kvl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v97zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.724550 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9c18ee97d5e19f635541477ce6c6eb5fe1417551582bb34c9e0ec77a134d660\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.737035 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.737347 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.737555 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.737688 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.737831 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:02Z","lastTransitionTime":"2026-02-19T10:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.743431 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.757553 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8ede5032c6812f2b019538d253d2c07cf1659f30fcc4c52b27e6a091fb0758f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c005a321618306472457a36ad24c5550bd0d2320bd395d6c022a84a7fdebbb01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.770004 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c742c05c-adc4-491d-994f-036af2fda249\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49365be7e1db37aca235d55ae33418e89574caaa0ff6a67b0ee3a950e734220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1f41e09b6058e874ace3c98bd66981467c7d
16ba4df94700158d866e48ffb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngn6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sx75t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.788334 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.802684 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxbrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:44Z\\\",\\\"message\\\":\\\"2026-02-19T10:08:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670\\\\n2026-02-19T10:08:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d1e4d1f1-73f8-4a77-a08a-dbfba1595670 to /host/opt/cni/bin/\\\\n2026-02-19T10:08:59Z [verbose] multus-daemon started\\\\n2026-02-19T10:08:59Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T10:09:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-trc25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxbrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.831320 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"554fedad-397a-4827-b5ad-48a681a7a2e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T10:09:47Z\\\",\\\"message\\\":\\\"r 2 for removal\\\\nI0219 10:09:47.457252 6865 factory.go:656] Stopping watch 
factory\\\\nI0219 10:09:47.457280 6865 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 10:09:47.457367 6865 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 10:09:47.458036 6865 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 10:09:47.458192 6865 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 10:09:47.458517 6865 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 10:09:47.479965 6865 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 10:09:47.480014 6865 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 10:09:47.480134 6865 ovnkube.go:599] Stopped ovnkube\\\\nI0219 10:09:47.480173 6865 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 10:09:47.480297 6865 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44f7cd5bffc76744f2
1e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkbcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ln785\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.841001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.841038 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.841049 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.841066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.841079 4811 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:02Z","lastTransitionTime":"2026-02-19T10:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.846237 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35d4fd8e-224d-468e-b6e2-a292a6e1949f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b57g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:09:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n5z6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc 
kubenswrapper[4811]: I0219 10:10:02.857474 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a562b78-a710-4eff-9b28-d2f060d91cc7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d07f52d0313d478ada71ba0ef08d6fe85e6dc1c7636b40aec2ba113b388739f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://c42e1697a703d581d273e5fba5d8108164f03f6393d943b98c6c1f95b834b962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c42e1697a703d581d273e5fba5d8108164f03f6393d943b98c6c1f95b834b962\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.869576 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.885189 4811 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9s768" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98110a48-08c0-4227-93dc-cfadaa45b044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T10:09:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec84e9992f678e60bd39671c08434036c0ebd6746e540b35e3b7afb638d1836\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T10:09:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172e87a98cfbd0a502275ccf46620d1287def64052c67836d643c55e6801e541\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a456c0eebda2f2841ffb208c8aeb11d7e8133b5f57658abcdff87c4c362b23f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:08:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:08:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9830a30d34cc3f14f1b3b81e82cd7a1eaae203c52c83692ba20055e478a86e16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3295d
aed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3295daed5d2a8964fa8241630bcf25c9f840da4761b0df0433d940e03a0bb0e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5df67b2b09ac22e945e092082b008f52d220542a39deb28a04f6e2b3a8cd734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:03Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39834e095da983a618a28d8f5377b0fc069445d9df7e18eff9e8d126a5e61983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T10:09:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T10:09:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zx9bn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T10:08:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9s768\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T10:10:02Z is after 2025-08-24T17:21:41Z" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.943676 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.943726 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.943742 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.943766 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:02 crc kubenswrapper[4811]: I0219 10:10:02.943782 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:02Z","lastTransitionTime":"2026-02-19T10:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.046701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.046740 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.046751 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.046770 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.046782 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:03Z","lastTransitionTime":"2026-02-19T10:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.149003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.149063 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.149074 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.149094 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.149108 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:03Z","lastTransitionTime":"2026-02-19T10:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.251983 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.252050 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.252067 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.252101 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.252122 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:03Z","lastTransitionTime":"2026-02-19T10:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.354851 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.354891 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.354901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.354914 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.354923 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:03Z","lastTransitionTime":"2026-02-19T10:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.457785 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.457833 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.457849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.457866 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.457879 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:03Z","lastTransitionTime":"2026-02-19T10:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.561373 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.561440 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.561477 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.561500 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.561515 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:03Z","lastTransitionTime":"2026-02-19T10:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.565587 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:47:03.096454004 +0000 UTC Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.583113 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.583113 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.583141 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:03 crc kubenswrapper[4811]: E0219 10:10:03.583418 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:03 crc kubenswrapper[4811]: E0219 10:10:03.583557 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:03 crc kubenswrapper[4811]: E0219 10:10:03.583286 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.664672 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.664724 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.664736 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.664756 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.664768 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:03Z","lastTransitionTime":"2026-02-19T10:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.767041 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.767104 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.767127 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.767156 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.767182 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:03Z","lastTransitionTime":"2026-02-19T10:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.869831 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.869877 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.869889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.869905 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.869917 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:03Z","lastTransitionTime":"2026-02-19T10:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.973339 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.973395 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.973412 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.973436 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:03 crc kubenswrapper[4811]: I0219 10:10:03.973484 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:03Z","lastTransitionTime":"2026-02-19T10:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.077214 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.077279 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.077289 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.077306 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.077317 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:04Z","lastTransitionTime":"2026-02-19T10:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.179785 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.179846 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.179860 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.179880 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.179892 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:04Z","lastTransitionTime":"2026-02-19T10:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.283404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.283552 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.283582 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.283609 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.283629 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:04Z","lastTransitionTime":"2026-02-19T10:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.387030 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.387111 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.387125 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.387177 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.387192 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:04Z","lastTransitionTime":"2026-02-19T10:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.490183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.490235 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.490244 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.490260 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.490270 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:04Z","lastTransitionTime":"2026-02-19T10:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.566131 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:09:10.43535227 +0000 UTC Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.582844 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:04 crc kubenswrapper[4811]: E0219 10:10:04.583260 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.594050 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.594126 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.594139 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.594159 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.594177 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:04Z","lastTransitionTime":"2026-02-19T10:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.697680 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.698201 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.698213 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.698236 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.698249 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:04Z","lastTransitionTime":"2026-02-19T10:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.800767 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.800817 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.800829 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.800849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.800865 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:04Z","lastTransitionTime":"2026-02-19T10:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.904278 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.904352 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.904373 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.904400 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:04 crc kubenswrapper[4811]: I0219 10:10:04.904421 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:04Z","lastTransitionTime":"2026-02-19T10:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.007097 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.007151 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.007168 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.007191 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.007209 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:05Z","lastTransitionTime":"2026-02-19T10:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.109781 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.109852 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.109876 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.109904 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.109929 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:05Z","lastTransitionTime":"2026-02-19T10:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.212848 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.212901 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.212919 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.212943 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.212960 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:05Z","lastTransitionTime":"2026-02-19T10:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.315575 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.315631 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.315644 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.315666 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.315679 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:05Z","lastTransitionTime":"2026-02-19T10:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.418798 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.418849 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.418865 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.418889 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.418907 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:05Z","lastTransitionTime":"2026-02-19T10:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.521858 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.521917 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.521930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.521951 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.521967 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:05Z","lastTransitionTime":"2026-02-19T10:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.566566 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:10:05.191842745 +0000 UTC Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.582877 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.582970 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:05 crc kubenswrapper[4811]: E0219 10:10:05.583044 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.583111 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:05 crc kubenswrapper[4811]: E0219 10:10:05.583258 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:05 crc kubenswrapper[4811]: E0219 10:10:05.583305 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.624789 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.624873 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.624887 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.624903 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.624914 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:05Z","lastTransitionTime":"2026-02-19T10:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.727620 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.727675 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.727686 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.727701 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.727712 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:05Z","lastTransitionTime":"2026-02-19T10:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.831066 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.831110 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.831122 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.831143 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.831158 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:05Z","lastTransitionTime":"2026-02-19T10:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.933932 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.933993 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.934003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.934022 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:05 crc kubenswrapper[4811]: I0219 10:10:05.934035 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:05Z","lastTransitionTime":"2026-02-19T10:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.037844 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.037899 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.037913 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.037933 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.037945 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:06Z","lastTransitionTime":"2026-02-19T10:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.140930 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.141003 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.141020 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.141051 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.141074 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:06Z","lastTransitionTime":"2026-02-19T10:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.243265 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.243301 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.243311 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.243332 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.243349 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:06Z","lastTransitionTime":"2026-02-19T10:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.345950 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.346001 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.346014 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.346032 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.346045 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:06Z","lastTransitionTime":"2026-02-19T10:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.448568 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.448615 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.448628 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.448645 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.448658 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:06Z","lastTransitionTime":"2026-02-19T10:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.551361 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.551404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.551415 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.551431 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.551442 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:06Z","lastTransitionTime":"2026-02-19T10:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.567069 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:47:24.322320623 +0000 UTC Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.582591 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:06 crc kubenswrapper[4811]: E0219 10:10:06.582765 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.654314 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.654389 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.654413 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.654442 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.654495 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:06Z","lastTransitionTime":"2026-02-19T10:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.757167 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.757206 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.757216 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.757231 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.757241 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:06Z","lastTransitionTime":"2026-02-19T10:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.860358 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.860410 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.860421 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.860438 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.860468 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:06Z","lastTransitionTime":"2026-02-19T10:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.964135 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.964183 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.964195 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.964211 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:06 crc kubenswrapper[4811]: I0219 10:10:06.964222 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:06Z","lastTransitionTime":"2026-02-19T10:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.066404 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.066439 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.066484 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.066501 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.066512 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:07Z","lastTransitionTime":"2026-02-19T10:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.168745 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.168793 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.168805 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.168823 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.168838 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:07Z","lastTransitionTime":"2026-02-19T10:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.271639 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.271683 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.271695 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.271712 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.271723 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:07Z","lastTransitionTime":"2026-02-19T10:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.374428 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.374507 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.374517 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.374533 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.374542 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:07Z","lastTransitionTime":"2026-02-19T10:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.404359 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.404416 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.404426 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.404485 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.404502 4811 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T10:10:07Z","lastTransitionTime":"2026-02-19T10:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.461994 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6"] Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.462688 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.467060 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.467401 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.467664 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.468079 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.528581 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=8.528542051 podStartE2EDuration="8.528542051s" podCreationTimestamp="2026-02-19 10:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:07.52634953 +0000 UTC m=+96.052814216" watchObservedRunningTime="2026-02-19 10:10:07.528542051 +0000 UTC m=+96.055006977" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.563992 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9s768" podStartSLOduration=72.563976382 podStartE2EDuration="1m12.563976382s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:07.563843039 +0000 UTC m=+96.090307735" watchObservedRunningTime="2026-02-19 10:10:07.563976382 +0000 
UTC m=+96.090441068" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.567375 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 13:01:25.350669747 +0000 UTC Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.567401 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.578059 4811 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.582514 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:07 crc kubenswrapper[4811]: E0219 10:10:07.582619 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.582641 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.582684 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:07 crc kubenswrapper[4811]: E0219 10:10:07.582800 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:07 crc kubenswrapper[4811]: E0219 10:10:07.582836 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.587667 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bpzh7" podStartSLOduration=73.587639287 podStartE2EDuration="1m13.587639287s" podCreationTimestamp="2026-02-19 10:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:07.57836952 +0000 UTC m=+96.104834206" watchObservedRunningTime="2026-02-19 10:10:07.587639287 +0000 UTC m=+96.114103973" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.601143 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-nw57r" podStartSLOduration=72.601120913 podStartE2EDuration="1m12.601120913s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:07.588142779 +0000 UTC m=+96.114607475" watchObservedRunningTime="2026-02-19 10:10:07.601120913 +0000 UTC m=+96.127585599" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.610435 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6067e097-629c-41d1-88b7-cba3ff38420f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.610483 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6067e097-629c-41d1-88b7-cba3ff38420f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.610513 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6067e097-629c-41d1-88b7-cba3ff38420f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.610536 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6067e097-629c-41d1-88b7-cba3ff38420f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" 
Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.610580 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6067e097-629c-41d1-88b7-cba3ff38420f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.630553 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.630519443 podStartE2EDuration="1m12.630519443s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:07.601627865 +0000 UTC m=+96.128092541" watchObservedRunningTime="2026-02-19 10:10:07.630519443 +0000 UTC m=+96.156984129" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.651187 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.651163407 podStartE2EDuration="40.651163407s" podCreationTimestamp="2026-02-19 10:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:07.650177054 +0000 UTC m=+96.176641740" watchObservedRunningTime="2026-02-19 10:10:07.651163407 +0000 UTC m=+96.177628093" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.651612 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=73.651604897 podStartE2EDuration="1m13.651604897s" podCreationTimestamp="2026-02-19 10:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:07.633183145 +0000 UTC m=+96.159647841" watchObservedRunningTime="2026-02-19 10:10:07.651604897 +0000 UTC m=+96.178069593" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.692298 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podStartSLOduration=73.692267011 podStartE2EDuration="1m13.692267011s" podCreationTimestamp="2026-02-19 10:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:07.678212971 +0000 UTC m=+96.204677657" watchObservedRunningTime="2026-02-19 10:10:07.692267011 +0000 UTC m=+96.218731687" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.711207 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6067e097-629c-41d1-88b7-cba3ff38420f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.711265 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6067e097-629c-41d1-88b7-cba3ff38420f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.711330 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6067e097-629c-41d1-88b7-cba3ff38420f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: 
\"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.711357 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6067e097-629c-41d1-88b7-cba3ff38420f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.711376 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6067e097-629c-41d1-88b7-cba3ff38420f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.711439 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6067e097-629c-41d1-88b7-cba3ff38420f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.712503 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6067e097-629c-41d1-88b7-cba3ff38420f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.712910 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/6067e097-629c-41d1-88b7-cba3ff38420f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.719426 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6067e097-629c-41d1-88b7-cba3ff38420f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.730434 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6067e097-629c-41d1-88b7-cba3ff38420f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mszp6\" (UID: \"6067e097-629c-41d1-88b7-cba3ff38420f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.757318 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sx75t" podStartSLOduration=71.757291515 podStartE2EDuration="1m11.757291515s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:07.756226891 +0000 UTC m=+96.282691587" watchObservedRunningTime="2026-02-19 10:10:07.757291515 +0000 UTC m=+96.283756201" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.783300 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" Feb 19 10:10:07 crc kubenswrapper[4811]: I0219 10:10:07.791604 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lxbrg" podStartSLOduration=72.791570999 podStartE2EDuration="1m12.791570999s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:07.791357454 +0000 UTC m=+96.317822170" watchObservedRunningTime="2026-02-19 10:10:07.791570999 +0000 UTC m=+96.318035685" Feb 19 10:10:07 crc kubenswrapper[4811]: W0219 10:10:07.805490 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6067e097_629c_41d1_88b7_cba3ff38420f.slice/crio-7e7fecaa29647eca0c539e8812e3fc0a23c274a95695276f050694afcc4e2d15 WatchSource:0}: Error finding container 7e7fecaa29647eca0c539e8812e3fc0a23c274a95695276f050694afcc4e2d15: Status 404 returned error can't find the container with id 7e7fecaa29647eca0c539e8812e3fc0a23c274a95695276f050694afcc4e2d15 Feb 19 10:10:08 crc kubenswrapper[4811]: I0219 10:10:08.326315 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" event={"ID":"6067e097-629c-41d1-88b7-cba3ff38420f","Type":"ContainerStarted","Data":"fa9ffef416c343b19ad94df9372416efed67955f3ac3e35287bc4fb454164a4e"} Feb 19 10:10:08 crc kubenswrapper[4811]: I0219 10:10:08.326392 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" event={"ID":"6067e097-629c-41d1-88b7-cba3ff38420f","Type":"ContainerStarted","Data":"7e7fecaa29647eca0c539e8812e3fc0a23c274a95695276f050694afcc4e2d15"} Feb 19 10:10:08 crc kubenswrapper[4811]: I0219 10:10:08.582808 4811 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:08 crc kubenswrapper[4811]: E0219 10:10:08.583034 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:09 crc kubenswrapper[4811]: I0219 10:10:09.582414 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:09 crc kubenswrapper[4811]: I0219 10:10:09.582490 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:09 crc kubenswrapper[4811]: E0219 10:10:09.582609 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:09 crc kubenswrapper[4811]: I0219 10:10:09.582703 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:09 crc kubenswrapper[4811]: E0219 10:10:09.583377 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:09 crc kubenswrapper[4811]: E0219 10:10:09.583604 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:10 crc kubenswrapper[4811]: I0219 10:10:10.582578 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:10 crc kubenswrapper[4811]: E0219 10:10:10.582819 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:11 crc kubenswrapper[4811]: I0219 10:10:11.582339 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:11 crc kubenswrapper[4811]: I0219 10:10:11.582371 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:11 crc kubenswrapper[4811]: I0219 10:10:11.582689 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:11 crc kubenswrapper[4811]: E0219 10:10:11.582836 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:11 crc kubenswrapper[4811]: E0219 10:10:11.582695 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:11 crc kubenswrapper[4811]: E0219 10:10:11.582989 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:11 crc kubenswrapper[4811]: I0219 10:10:11.599439 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mszp6" podStartSLOduration=76.599420174 podStartE2EDuration="1m16.599420174s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:08.342945279 +0000 UTC m=+96.869409995" watchObservedRunningTime="2026-02-19 10:10:11.599420174 +0000 UTC m=+100.125884860" Feb 19 10:10:11 crc kubenswrapper[4811]: I0219 10:10:11.599624 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 10:10:12 crc kubenswrapper[4811]: I0219 10:10:12.582382 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:12 crc kubenswrapper[4811]: E0219 10:10:12.583645 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:12 crc kubenswrapper[4811]: I0219 10:10:12.617058 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.617013517 podStartE2EDuration="1.617013517s" podCreationTimestamp="2026-02-19 10:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:12.61587288 +0000 UTC m=+101.142337576" watchObservedRunningTime="2026-02-19 10:10:12.617013517 +0000 UTC m=+101.143478213" Feb 19 10:10:13 crc kubenswrapper[4811]: I0219 10:10:13.582785 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:13 crc kubenswrapper[4811]: I0219 10:10:13.582870 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:13 crc kubenswrapper[4811]: E0219 10:10:13.583036 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:13 crc kubenswrapper[4811]: I0219 10:10:13.583278 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:13 crc kubenswrapper[4811]: E0219 10:10:13.583350 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:13 crc kubenswrapper[4811]: E0219 10:10:13.583476 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:14 crc kubenswrapper[4811]: I0219 10:10:14.583152 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:14 crc kubenswrapper[4811]: E0219 10:10:14.583695 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:14 crc kubenswrapper[4811]: I0219 10:10:14.584261 4811 scope.go:117] "RemoveContainer" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:10:14 crc kubenswrapper[4811]: E0219 10:10:14.584528 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" Feb 19 10:10:14 crc kubenswrapper[4811]: I0219 10:10:14.792602 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:14 crc kubenswrapper[4811]: E0219 10:10:14.792791 4811 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:10:14 crc kubenswrapper[4811]: E0219 10:10:14.792851 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs podName:35d4fd8e-224d-468e-b6e2-a292a6e1949f nodeName:}" failed. No retries permitted until 2026-02-19 10:11:18.792835949 +0000 UTC m=+167.319300635 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs") pod "network-metrics-daemon-n5z6r" (UID: "35d4fd8e-224d-468e-b6e2-a292a6e1949f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 10:10:15 crc kubenswrapper[4811]: I0219 10:10:15.582916 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:15 crc kubenswrapper[4811]: I0219 10:10:15.583012 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:15 crc kubenswrapper[4811]: I0219 10:10:15.583068 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:15 crc kubenswrapper[4811]: E0219 10:10:15.583216 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:15 crc kubenswrapper[4811]: E0219 10:10:15.583317 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:15 crc kubenswrapper[4811]: E0219 10:10:15.583699 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:16 crc kubenswrapper[4811]: I0219 10:10:16.582951 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:16 crc kubenswrapper[4811]: E0219 10:10:16.583175 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:17 crc kubenswrapper[4811]: I0219 10:10:17.582715 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:17 crc kubenswrapper[4811]: I0219 10:10:17.582748 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:17 crc kubenswrapper[4811]: I0219 10:10:17.582942 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:17 crc kubenswrapper[4811]: E0219 10:10:17.582937 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:17 crc kubenswrapper[4811]: E0219 10:10:17.583042 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:17 crc kubenswrapper[4811]: E0219 10:10:17.583167 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:18 crc kubenswrapper[4811]: I0219 10:10:18.582692 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:18 crc kubenswrapper[4811]: E0219 10:10:18.582999 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:19 crc kubenswrapper[4811]: I0219 10:10:19.582998 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:19 crc kubenswrapper[4811]: I0219 10:10:19.583040 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:19 crc kubenswrapper[4811]: I0219 10:10:19.583086 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:19 crc kubenswrapper[4811]: E0219 10:10:19.583708 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:19 crc kubenswrapper[4811]: E0219 10:10:19.583902 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:19 crc kubenswrapper[4811]: E0219 10:10:19.584090 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:20 crc kubenswrapper[4811]: I0219 10:10:20.583100 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:20 crc kubenswrapper[4811]: E0219 10:10:20.583318 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:21 crc kubenswrapper[4811]: I0219 10:10:21.583068 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:21 crc kubenswrapper[4811]: I0219 10:10:21.583216 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:21 crc kubenswrapper[4811]: I0219 10:10:21.583100 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:21 crc kubenswrapper[4811]: E0219 10:10:21.583296 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:21 crc kubenswrapper[4811]: E0219 10:10:21.583434 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:21 crc kubenswrapper[4811]: E0219 10:10:21.583592 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:22 crc kubenswrapper[4811]: I0219 10:10:22.582956 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:22 crc kubenswrapper[4811]: E0219 10:10:22.584264 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:23 crc kubenswrapper[4811]: I0219 10:10:23.582880 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:23 crc kubenswrapper[4811]: I0219 10:10:23.583282 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:23 crc kubenswrapper[4811]: E0219 10:10:23.583599 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:23 crc kubenswrapper[4811]: I0219 10:10:23.583759 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:23 crc kubenswrapper[4811]: E0219 10:10:23.584020 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:23 crc kubenswrapper[4811]: E0219 10:10:23.584310 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:24 crc kubenswrapper[4811]: I0219 10:10:24.583073 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:24 crc kubenswrapper[4811]: E0219 10:10:24.583357 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:25 crc kubenswrapper[4811]: I0219 10:10:25.582305 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:25 crc kubenswrapper[4811]: I0219 10:10:25.582325 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:25 crc kubenswrapper[4811]: E0219 10:10:25.582539 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:25 crc kubenswrapper[4811]: E0219 10:10:25.582406 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:25 crc kubenswrapper[4811]: I0219 10:10:25.582629 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:25 crc kubenswrapper[4811]: E0219 10:10:25.582825 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:26 crc kubenswrapper[4811]: I0219 10:10:26.583804 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:26 crc kubenswrapper[4811]: E0219 10:10:26.583959 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:26 crc kubenswrapper[4811]: I0219 10:10:26.584685 4811 scope.go:117] "RemoveContainer" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:10:26 crc kubenswrapper[4811]: E0219 10:10:26.584885 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ln785_openshift-ovn-kubernetes(554fedad-397a-4827-b5ad-48a681a7a2e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" Feb 19 10:10:27 crc kubenswrapper[4811]: I0219 10:10:27.582842 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:27 crc kubenswrapper[4811]: I0219 10:10:27.583034 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:27 crc kubenswrapper[4811]: I0219 10:10:27.583096 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:27 crc kubenswrapper[4811]: E0219 10:10:27.583205 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:27 crc kubenswrapper[4811]: E0219 10:10:27.583341 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:27 crc kubenswrapper[4811]: E0219 10:10:27.584324 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:28 crc kubenswrapper[4811]: I0219 10:10:28.583390 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:28 crc kubenswrapper[4811]: E0219 10:10:28.583671 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:29 crc kubenswrapper[4811]: I0219 10:10:29.583129 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:29 crc kubenswrapper[4811]: I0219 10:10:29.583217 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:29 crc kubenswrapper[4811]: E0219 10:10:29.583338 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:29 crc kubenswrapper[4811]: I0219 10:10:29.583427 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:29 crc kubenswrapper[4811]: E0219 10:10:29.583633 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:29 crc kubenswrapper[4811]: E0219 10:10:29.583704 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:30 crc kubenswrapper[4811]: I0219 10:10:30.582771 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:30 crc kubenswrapper[4811]: E0219 10:10:30.582948 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:31 crc kubenswrapper[4811]: I0219 10:10:31.582994 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:31 crc kubenswrapper[4811]: I0219 10:10:31.583057 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:31 crc kubenswrapper[4811]: E0219 10:10:31.583117 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:31 crc kubenswrapper[4811]: E0219 10:10:31.583219 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:31 crc kubenswrapper[4811]: I0219 10:10:31.583016 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:31 crc kubenswrapper[4811]: E0219 10:10:31.583292 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:32 crc kubenswrapper[4811]: I0219 10:10:32.413351 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxbrg_9766aa7e-a66b-4efc-a5de-2fe49ef00eb8/kube-multus/1.log" Feb 19 10:10:32 crc kubenswrapper[4811]: I0219 10:10:32.413948 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxbrg_9766aa7e-a66b-4efc-a5de-2fe49ef00eb8/kube-multus/0.log" Feb 19 10:10:32 crc kubenswrapper[4811]: I0219 10:10:32.414012 4811 generic.go:334] "Generic (PLEG): container finished" podID="9766aa7e-a66b-4efc-a5de-2fe49ef00eb8" containerID="5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd" exitCode=1 Feb 19 10:10:32 crc kubenswrapper[4811]: I0219 10:10:32.414088 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxbrg" event={"ID":"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8","Type":"ContainerDied","Data":"5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd"} Feb 19 10:10:32 crc kubenswrapper[4811]: I0219 10:10:32.414211 4811 scope.go:117] "RemoveContainer" 
containerID="03ab83dba22012391b9f6dc14d418ab2cb0ad2e22172f0f95bd9b73e962ea0e7" Feb 19 10:10:32 crc kubenswrapper[4811]: I0219 10:10:32.414718 4811 scope.go:117] "RemoveContainer" containerID="5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd" Feb 19 10:10:32 crc kubenswrapper[4811]: E0219 10:10:32.414927 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lxbrg_openshift-multus(9766aa7e-a66b-4efc-a5de-2fe49ef00eb8)\"" pod="openshift-multus/multus-lxbrg" podUID="9766aa7e-a66b-4efc-a5de-2fe49ef00eb8" Feb 19 10:10:32 crc kubenswrapper[4811]: I0219 10:10:32.582419 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:32 crc kubenswrapper[4811]: E0219 10:10:32.584505 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:32 crc kubenswrapper[4811]: E0219 10:10:32.587638 4811 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 10:10:32 crc kubenswrapper[4811]: E0219 10:10:32.684506 4811 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 10:10:33 crc kubenswrapper[4811]: I0219 10:10:33.419269 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxbrg_9766aa7e-a66b-4efc-a5de-2fe49ef00eb8/kube-multus/1.log" Feb 19 10:10:33 crc kubenswrapper[4811]: I0219 10:10:33.582689 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:33 crc kubenswrapper[4811]: I0219 10:10:33.582826 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:33 crc kubenswrapper[4811]: E0219 10:10:33.582857 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:33 crc kubenswrapper[4811]: I0219 10:10:33.582897 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:33 crc kubenswrapper[4811]: E0219 10:10:33.583031 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:33 crc kubenswrapper[4811]: E0219 10:10:33.583118 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:34 crc kubenswrapper[4811]: I0219 10:10:34.583083 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:34 crc kubenswrapper[4811]: E0219 10:10:34.583314 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:35 crc kubenswrapper[4811]: I0219 10:10:35.582721 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:35 crc kubenswrapper[4811]: I0219 10:10:35.582721 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:35 crc kubenswrapper[4811]: I0219 10:10:35.582734 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:35 crc kubenswrapper[4811]: E0219 10:10:35.582997 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:35 crc kubenswrapper[4811]: E0219 10:10:35.583332 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:35 crc kubenswrapper[4811]: E0219 10:10:35.584115 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:36 crc kubenswrapper[4811]: I0219 10:10:36.583274 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:36 crc kubenswrapper[4811]: E0219 10:10:36.583971 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:37 crc kubenswrapper[4811]: I0219 10:10:37.583337 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:37 crc kubenswrapper[4811]: E0219 10:10:37.583585 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:37 crc kubenswrapper[4811]: I0219 10:10:37.583712 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:37 crc kubenswrapper[4811]: I0219 10:10:37.583770 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:37 crc kubenswrapper[4811]: E0219 10:10:37.583885 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:37 crc kubenswrapper[4811]: E0219 10:10:37.583988 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:37 crc kubenswrapper[4811]: E0219 10:10:37.685489 4811 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 10:10:38 crc kubenswrapper[4811]: I0219 10:10:38.582267 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:38 crc kubenswrapper[4811]: E0219 10:10:38.582422 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:39 crc kubenswrapper[4811]: I0219 10:10:39.582887 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:39 crc kubenswrapper[4811]: I0219 10:10:39.582912 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:39 crc kubenswrapper[4811]: E0219 10:10:39.583058 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:39 crc kubenswrapper[4811]: E0219 10:10:39.583359 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:39 crc kubenswrapper[4811]: I0219 10:10:39.583665 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:39 crc kubenswrapper[4811]: E0219 10:10:39.583818 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:40 crc kubenswrapper[4811]: I0219 10:10:40.582381 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:40 crc kubenswrapper[4811]: E0219 10:10:40.582973 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:40 crc kubenswrapper[4811]: I0219 10:10:40.583121 4811 scope.go:117] "RemoveContainer" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:10:41 crc kubenswrapper[4811]: I0219 10:10:41.452180 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/3.log" Feb 19 10:10:41 crc kubenswrapper[4811]: I0219 10:10:41.455284 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerStarted","Data":"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9"} Feb 19 10:10:41 crc kubenswrapper[4811]: I0219 10:10:41.455885 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:10:41 crc kubenswrapper[4811]: I0219 10:10:41.489191 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podStartSLOduration=106.489161879 podStartE2EDuration="1m46.489161879s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:41.488885652 +0000 UTC m=+130.015350358" watchObservedRunningTime="2026-02-19 
10:10:41.489161879 +0000 UTC m=+130.015626565" Feb 19 10:10:41 crc kubenswrapper[4811]: I0219 10:10:41.582940 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:41 crc kubenswrapper[4811]: I0219 10:10:41.583037 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:41 crc kubenswrapper[4811]: I0219 10:10:41.583038 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:41 crc kubenswrapper[4811]: E0219 10:10:41.583541 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:41 crc kubenswrapper[4811]: E0219 10:10:41.583578 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:41 crc kubenswrapper[4811]: E0219 10:10:41.584098 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:41 crc kubenswrapper[4811]: I0219 10:10:41.889111 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n5z6r"] Feb 19 10:10:41 crc kubenswrapper[4811]: I0219 10:10:41.889296 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:41 crc kubenswrapper[4811]: E0219 10:10:41.889403 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:42 crc kubenswrapper[4811]: E0219 10:10:42.687531 4811 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 10:10:43 crc kubenswrapper[4811]: I0219 10:10:43.582561 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:43 crc kubenswrapper[4811]: I0219 10:10:43.582657 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:43 crc kubenswrapper[4811]: I0219 10:10:43.582607 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:43 crc kubenswrapper[4811]: I0219 10:10:43.582603 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:43 crc kubenswrapper[4811]: E0219 10:10:43.582845 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:43 crc kubenswrapper[4811]: E0219 10:10:43.583112 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:43 crc kubenswrapper[4811]: E0219 10:10:43.583218 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:43 crc kubenswrapper[4811]: E0219 10:10:43.583331 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:44 crc kubenswrapper[4811]: I0219 10:10:44.583788 4811 scope.go:117] "RemoveContainer" containerID="5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd" Feb 19 10:10:45 crc kubenswrapper[4811]: I0219 10:10:45.472662 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxbrg_9766aa7e-a66b-4efc-a5de-2fe49ef00eb8/kube-multus/1.log" Feb 19 10:10:45 crc kubenswrapper[4811]: I0219 10:10:45.472794 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxbrg" event={"ID":"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8","Type":"ContainerStarted","Data":"8a4885b0091a928d075b1754c06a54a38c4cec164e586a2e7cb86b25fc90f049"} Feb 19 10:10:45 crc kubenswrapper[4811]: I0219 10:10:45.582664 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:45 crc kubenswrapper[4811]: I0219 10:10:45.582695 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:45 crc kubenswrapper[4811]: I0219 10:10:45.582761 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:45 crc kubenswrapper[4811]: I0219 10:10:45.582794 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:45 crc kubenswrapper[4811]: E0219 10:10:45.583413 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:45 crc kubenswrapper[4811]: E0219 10:10:45.583525 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:45 crc kubenswrapper[4811]: E0219 10:10:45.583624 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:45 crc kubenswrapper[4811]: E0219 10:10:45.583686 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.583088 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.583104 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:10:47 crc kubenswrapper[4811]: E0219 10:10:47.583736 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.583179 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:10:47 crc kubenswrapper[4811]: E0219 10:10:47.583920 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.583149 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:10:47 crc kubenswrapper[4811]: E0219 10:10:47.584016 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 10:10:47 crc kubenswrapper[4811]: E0219 10:10:47.584069 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n5z6r" podUID="35d4fd8e-224d-468e-b6e2-a292a6e1949f" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.866839 4811 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.917990 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.918735 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-24tbz"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.919120 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.919490 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.919697 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.920208 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.923371 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.923979 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bf2qf"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.924503 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.924737 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.924950 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.926050 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.926769 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-dh45w"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.927471 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.929759 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-t44xg"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.930416 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t44xg" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.932869 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.933076 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.934780 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-d8j77"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.935504 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.938977 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.939844 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.951499 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.951527 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.956157 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.963924 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.964224 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.964401 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.964613 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.964610 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" 
Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.965022 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.967129 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.967947 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.973075 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.973895 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.974753 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.974965 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.975678 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.975910 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.976215 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.976537 4811 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.977245 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.977396 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.977500 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.977540 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.977836 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.977976 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.978099 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.978150 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.978230 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.978307 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 
10:10:47.978495 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.978624 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.975927 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.978688 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.978738 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.978236 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.979367 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.979578 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.979657 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bsbln"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.979588 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.979747 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.979675 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.980215 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bsbln" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.981268 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.981545 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.981589 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.981666 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.982253 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.982595 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.982890 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.983294 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.983609 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.983953 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.984094 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.984228 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.984391 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.984574 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.985408 4811 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.985574 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zrd6d"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.986227 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.986810 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b5p7b"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.987430 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.987966 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-87n5j"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.989662 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.989826 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.990079 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.990402 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.990793 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8wjls"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.991291 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.991891 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994496 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr9m9\" (UniqueName: \"kubernetes.io/projected/47f24fcb-a536-4173-b099-9fadf4be99c6-kube-api-access-xr9m9\") pod \"downloads-7954f5f757-t44xg\" (UID: \"47f24fcb-a536-4173-b099-9fadf4be99c6\") " pod="openshift-console/downloads-7954f5f757-t44xg" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994580 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-client-ca\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994611 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 
10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994657 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94d45853-c52b-46a4-b27d-b4d33b7527d9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994680 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2fvs\" (UniqueName: \"kubernetes.io/projected/0c629b88-bf23-410a-903e-ae58dffc3bad-kube-api-access-j2fvs\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994727 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/889906af-3050-48fa-816a-7f0631f0ce47-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8j7vw\" (UID: \"889906af-3050-48fa-816a-7f0631f0ce47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994754 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-oauth-config\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994803 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-config\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994833 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d45853-c52b-46a4-b27d-b4d33b7527d9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994897 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-service-ca\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994923 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/077abdec-a39e-43d3-bbba-8dd0afbb448b-images\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.994966 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-trusted-ca-bundle\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995007 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94d45853-c52b-46a4-b27d-b4d33b7527d9-audit-dir\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995047 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-config\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995072 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077abdec-a39e-43d3-bbba-8dd0afbb448b-config\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995122 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5kxc\" (UniqueName: \"kubernetes.io/projected/37172e17-0240-46d9-878a-f396d66ea307-kube-api-access-r5kxc\") pod \"cluster-samples-operator-665b6dd947-zrrrs\" (UID: \"37172e17-0240-46d9-878a-f396d66ea307\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995204 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94d45853-c52b-46a4-b27d-b4d33b7527d9-serving-cert\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995232 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/077abdec-a39e-43d3-bbba-8dd0afbb448b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995259 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qtfv\" (UniqueName: \"kubernetes.io/projected/94d45853-c52b-46a4-b27d-b4d33b7527d9-kube-api-access-6qtfv\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995305 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/37172e17-0240-46d9-878a-f396d66ea307-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zrrrs\" (UID: \"37172e17-0240-46d9-878a-f396d66ea307\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995328 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-oauth-serving-cert\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995371 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94d45853-c52b-46a4-b27d-b4d33b7527d9-audit-policies\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995398 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt8bx\" (UniqueName: \"kubernetes.io/projected/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-kube-api-access-dt8bx\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995442 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c629b88-bf23-410a-903e-ae58dffc3bad-config\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995483 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c629b88-bf23-410a-903e-ae58dffc3bad-serving-cert\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995535 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6gfn\" (UniqueName: \"kubernetes.io/projected/889906af-3050-48fa-816a-7f0631f0ce47-kube-api-access-h6gfn\") pod \"openshift-config-operator-7777fb866f-8j7vw\" (UID: \"889906af-3050-48fa-816a-7f0631f0ce47\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995582 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kttl5\" (UniqueName: \"kubernetes.io/projected/de01cd94-d3c6-4fa5-9033-9a65a318ab82-kube-api-access-kttl5\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995631 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0c629b88-bf23-410a-903e-ae58dffc3bad-trusted-ca\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995657 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-serving-cert\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995701 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94d45853-c52b-46a4-b27d-b4d33b7527d9-etcd-client\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995724 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/889906af-3050-48fa-816a-7f0631f0ce47-serving-cert\") pod \"openshift-config-operator-7777fb866f-8j7vw\" (UID: \"889906af-3050-48fa-816a-7f0631f0ce47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995745 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-serving-cert\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995789 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94d45853-c52b-46a4-b27d-b4d33b7527d9-encryption-config\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.995813 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhr78\" (UniqueName: \"kubernetes.io/projected/077abdec-a39e-43d3-bbba-8dd0afbb448b-kube-api-access-bhr78\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.996625 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.996885 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 10:10:47 crc 
kubenswrapper[4811]: I0219 10:10:47.997374 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.997770 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.997886 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.997964 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.998616 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p4kkk"] Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.998839 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.998982 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:47 crc kubenswrapper[4811]: I0219 10:10:47.999530 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.000065 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.002152 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.003029 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.005089 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.011959 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.012242 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.012306 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.028362 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.030862 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.031057 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.031839 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"metrics-tls" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.032292 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.032636 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.032799 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.038975 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.039770 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.039863 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.040027 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.039781 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.040108 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.061585 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.061877 4811 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.062203 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.062483 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.062851 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.063040 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.063323 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.063582 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.063643 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mxhh7"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.063853 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.064091 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.064255 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.064768 4811 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.065043 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.065073 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.065400 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.065501 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.065604 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.065724 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.066745 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.067129 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.068023 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.068800 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.070234 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.070389 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.072049 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.072171 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.072273 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.073588 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.075690 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.078120 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.080522 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 
10:10:48.086411 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.087130 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.088282 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.090345 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.094049 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.094857 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.095250 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.095435 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.097813 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d45853-c52b-46a4-b27d-b4d33b7527d9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.097845 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-service-ca\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.097873 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-auth-proxy-config\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.097902 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/077abdec-a39e-43d3-bbba-8dd0afbb448b-images\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.097920 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-trusted-ca-bundle\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.097938 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19195ba-d354-4a69-bc60-364ba77eead8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bbw9j\" (UID: \"d19195ba-d354-4a69-bc60-364ba77eead8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.097965 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94d45853-c52b-46a4-b27d-b4d33b7527d9-audit-dir\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.097987 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-config\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098007 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25c5713b-847b-4f3f-9a76-2ded118334b3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 
10:10:48.098026 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077abdec-a39e-43d3-bbba-8dd0afbb448b-config\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098044 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e6ed8e-63f7-43a4-b162-3edc3c30c442-serving-cert\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098071 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94d45853-c52b-46a4-b27d-b4d33b7527d9-serving-cert\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098096 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/077abdec-a39e-43d3-bbba-8dd0afbb448b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098120 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5kxc\" (UniqueName: \"kubernetes.io/projected/37172e17-0240-46d9-878a-f396d66ea307-kube-api-access-r5kxc\") pod \"cluster-samples-operator-665b6dd947-zrrrs\" (UID: 
\"37172e17-0240-46d9-878a-f396d66ea307\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098143 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfb2l\" (UniqueName: \"kubernetes.io/projected/27e6ed8e-63f7-43a4-b162-3edc3c30c442-kube-api-access-jfb2l\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098168 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qtfv\" (UniqueName: \"kubernetes.io/projected/94d45853-c52b-46a4-b27d-b4d33b7527d9-kube-api-access-6qtfv\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098185 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/37172e17-0240-46d9-878a-f396d66ea307-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zrrrs\" (UID: \"37172e17-0240-46d9-878a-f396d66ea307\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098206 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-oauth-serving-cert\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098226 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94d45853-c52b-46a4-b27d-b4d33b7527d9-audit-policies\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098257 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vxgv9\" (UID: \"0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098280 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e361b2e-edcc-418f-861e-83b65fc2634a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6vsq7\" (UID: \"5e361b2e-edcc-418f-861e-83b65fc2634a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098299 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e93440f-9668-477a-8df8-a8da4d088cc8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-blkmf\" (UID: \"5e93440f-9668-477a-8df8-a8da4d088cc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098319 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e6ed8e-63f7-43a4-b162-3edc3c30c442-config\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") 
" pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098342 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt8bx\" (UniqueName: \"kubernetes.io/projected/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-kube-api-access-dt8bx\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098362 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19195ba-d354-4a69-bc60-364ba77eead8-config\") pod \"kube-controller-manager-operator-78b949d7b-bbw9j\" (UID: \"d19195ba-d354-4a69-bc60-364ba77eead8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098388 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgzgk\" (UniqueName: \"kubernetes.io/projected/92e65cdf-ee57-4aed-91ca-644ebdc03a27-kube-api-access-wgzgk\") pod \"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098412 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c629b88-bf23-410a-903e-ae58dffc3bad-config\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098430 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0c629b88-bf23-410a-903e-ae58dffc3bad-serving-cert\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098470 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6gfn\" (UniqueName: \"kubernetes.io/projected/889906af-3050-48fa-816a-7f0631f0ce47-kube-api-access-h6gfn\") pod \"openshift-config-operator-7777fb866f-8j7vw\" (UID: \"889906af-3050-48fa-816a-7f0631f0ce47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098491 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e93440f-9668-477a-8df8-a8da4d088cc8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-blkmf\" (UID: \"5e93440f-9668-477a-8df8-a8da4d088cc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098508 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6532e9-cca3-47cf-89e4-77fac53a5d18-serving-cert\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098538 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27e6ed8e-63f7-43a4-b162-3edc3c30c442-service-ca-bundle\") pod \"authentication-operator-69f744f599-87n5j\" (UID: 
\"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098673 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kttl5\" (UniqueName: \"kubernetes.io/projected/de01cd94-d3c6-4fa5-9033-9a65a318ab82-kube-api-access-kttl5\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098745 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0c629b88-bf23-410a-903e-ae58dffc3bad-trusted-ca\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098780 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e361b2e-edcc-418f-861e-83b65fc2634a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6vsq7\" (UID: \"5e361b2e-edcc-418f-861e-83b65fc2634a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098812 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fltlx\" (UniqueName: \"kubernetes.io/projected/5e361b2e-edcc-418f-861e-83b65fc2634a-kube-api-access-fltlx\") pod \"openshift-controller-manager-operator-756b6f6bc6-6vsq7\" (UID: \"5e361b2e-edcc-418f-861e-83b65fc2634a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098834 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-client-ca\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098863 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-serving-cert\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098886 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hbdr\" (UniqueName: \"kubernetes.io/projected/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-kube-api-access-4hbdr\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098919 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e93440f-9668-477a-8df8-a8da4d088cc8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-blkmf\" (UID: \"5e93440f-9668-477a-8df8-a8da4d088cc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098942 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92e65cdf-ee57-4aed-91ca-644ebdc03a27-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.098966 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc7f9\" (UniqueName: \"kubernetes.io/projected/9a6532e9-cca3-47cf-89e4-77fac53a5d18-kube-api-access-sc7f9\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099029 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94d45853-c52b-46a4-b27d-b4d33b7527d9-etcd-client\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099061 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/889906af-3050-48fa-816a-7f0631f0ce47-serving-cert\") pod \"openshift-config-operator-7777fb866f-8j7vw\" (UID: \"889906af-3050-48fa-816a-7f0631f0ce47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099089 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92e65cdf-ee57-4aed-91ca-644ebdc03a27-metrics-tls\") pod \"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099122 4811 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-serving-cert\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099147 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vxgv9\" (UID: \"0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099178 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94d45853-c52b-46a4-b27d-b4d33b7527d9-encryption-config\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099201 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhr78\" (UniqueName: \"kubernetes.io/projected/077abdec-a39e-43d3-bbba-8dd0afbb448b-kube-api-access-bhr78\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099222 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/65264825-0c26-4bac-bac5-05966eddd111-srv-cert\") pod \"catalog-operator-68c6474976-s595q\" (UID: \"65264825-0c26-4bac-bac5-05966eddd111\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" 
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099241 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/65264825-0c26-4bac-bac5-05966eddd111-profile-collector-cert\") pod \"catalog-operator-68c6474976-s595q\" (UID: \"65264825-0c26-4bac-bac5-05966eddd111\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099271 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-config\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099290 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25c5713b-847b-4f3f-9a76-2ded118334b3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099307 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flch4\" (UniqueName: \"kubernetes.io/projected/25c5713b-847b-4f3f-9a76-2ded118334b3-kube-api-access-flch4\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099335 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-machine-approver-tls\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099378 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-client-ca\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099399 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099421 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-config\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099439 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brfgq\" (UniqueName: \"kubernetes.io/projected/0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d-kube-api-access-brfgq\") pod \"openshift-apiserver-operator-796bbdcf4f-vxgv9\" (UID: \"0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099476 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-667qb\" (UniqueName: \"kubernetes.io/projected/65264825-0c26-4bac-bac5-05966eddd111-kube-api-access-667qb\") pod \"catalog-operator-68c6474976-s595q\" (UID: \"65264825-0c26-4bac-bac5-05966eddd111\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099498 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr9m9\" (UniqueName: \"kubernetes.io/projected/47f24fcb-a536-4173-b099-9fadf4be99c6-kube-api-access-xr9m9\") pod \"downloads-7954f5f757-t44xg\" (UID: \"47f24fcb-a536-4173-b099-9fadf4be99c6\") " pod="openshift-console/downloads-7954f5f757-t44xg" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099557 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92e65cdf-ee57-4aed-91ca-644ebdc03a27-trusted-ca\") pod \"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099581 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94d45853-c52b-46a4-b27d-b4d33b7527d9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099616 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2fvs\" (UniqueName: 
\"kubernetes.io/projected/0c629b88-bf23-410a-903e-ae58dffc3bad-kube-api-access-j2fvs\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099638 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/889906af-3050-48fa-816a-7f0631f0ce47-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8j7vw\" (UID: \"889906af-3050-48fa-816a-7f0631f0ce47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099657 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-oauth-config\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099680 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27e6ed8e-63f7-43a4-b162-3edc3c30c442-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099713 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-config\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 
10:10:48.099735 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d19195ba-d354-4a69-bc60-364ba77eead8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bbw9j\" (UID: \"d19195ba-d354-4a69-bc60-364ba77eead8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.099756 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/25c5713b-847b-4f3f-9a76-2ded118334b3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.100375 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-cmg4v"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.100961 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94d45853-c52b-46a4-b27d-b4d33b7527d9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.101577 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.101760 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/94d45853-c52b-46a4-b27d-b4d33b7527d9-audit-policies\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.102374 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.102605 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-client-ca\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.103058 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.103234 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/94d45853-c52b-46a4-b27d-b4d33b7527d9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.103433 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-service-ca\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.104118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c629b88-bf23-410a-903e-ae58dffc3bad-config\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.104513 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/889906af-3050-48fa-816a-7f0631f0ce47-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8j7vw\" (UID: \"889906af-3050-48fa-816a-7f0631f0ce47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.104643 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r462p"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.105242 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.108468 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-oauth-serving-cert\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.109466 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/077abdec-a39e-43d3-bbba-8dd0afbb448b-images\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.110466 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-oauth-config\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.110588 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/94d45853-c52b-46a4-b27d-b4d33b7527d9-audit-dir\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.110553 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-trusted-ca-bundle\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.111406 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0c629b88-bf23-410a-903e-ae58dffc3bad-trusted-ca\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.112362 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.113727 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077abdec-a39e-43d3-bbba-8dd0afbb448b-config\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.113875 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.114265 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-config\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.114745 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-c46m7"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.120874 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94d45853-c52b-46a4-b27d-b4d33b7527d9-serving-cert\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.119597 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-config\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.117845 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.121085 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8jqkt"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.121221 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.122050 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.122659 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.123760 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-24tbz"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.124724 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-serving-cert\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.124870 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.125089 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/94d45853-c52b-46a4-b27d-b4d33b7527d9-encryption-config\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.125634 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l7559"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.126793 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.128094 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/889906af-3050-48fa-816a-7f0631f0ce47-serving-cert\") pod \"openshift-config-operator-7777fb866f-8j7vw\" (UID: \"889906af-3050-48fa-816a-7f0631f0ce47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.136189 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/94d45853-c52b-46a4-b27d-b4d33b7527d9-etcd-client\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.141194 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c629b88-bf23-410a-903e-ae58dffc3bad-serving-cert\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.141694 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/077abdec-a39e-43d3-bbba-8dd0afbb448b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.142597 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.143077 4811 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.143233 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.144758 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.145250 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-serving-cert\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.171309 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.172035 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/37172e17-0240-46d9-878a-f396d66ea307-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zrrrs\" (UID: \"37172e17-0240-46d9-878a-f396d66ea307\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.172188 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.179985 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.181415 4811 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kfhx5"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.182376 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.182670 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.183046 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kfhx5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.183471 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.183875 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.188156 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t44xg"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.199268 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.199361 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205180 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19195ba-d354-4a69-bc60-364ba77eead8-config\") pod \"kube-controller-manager-operator-78b949d7b-bbw9j\" (UID: \"d19195ba-d354-4a69-bc60-364ba77eead8\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205257 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgzgk\" (UniqueName: \"kubernetes.io/projected/92e65cdf-ee57-4aed-91ca-644ebdc03a27-kube-api-access-wgzgk\") pod \"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205290 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e93440f-9668-477a-8df8-a8da4d088cc8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-blkmf\" (UID: \"5e93440f-9668-477a-8df8-a8da4d088cc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205309 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6532e9-cca3-47cf-89e4-77fac53a5d18-serving-cert\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205335 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27e6ed8e-63f7-43a4-b162-3edc3c30c442-service-ca-bundle\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205363 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e361b2e-edcc-418f-861e-83b65fc2634a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6vsq7\" (UID: \"5e361b2e-edcc-418f-861e-83b65fc2634a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205380 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fltlx\" (UniqueName: \"kubernetes.io/projected/5e361b2e-edcc-418f-861e-83b65fc2634a-kube-api-access-fltlx\") pod \"openshift-controller-manager-operator-756b6f6bc6-6vsq7\" (UID: \"5e361b2e-edcc-418f-861e-83b65fc2634a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205397 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-client-ca\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205417 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hbdr\" (UniqueName: \"kubernetes.io/projected/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-kube-api-access-4hbdr\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205435 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e93440f-9668-477a-8df8-a8da4d088cc8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-blkmf\" (UID: 
\"5e93440f-9668-477a-8df8-a8da4d088cc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205463 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92e65cdf-ee57-4aed-91ca-644ebdc03a27-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205480 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc7f9\" (UniqueName: \"kubernetes.io/projected/9a6532e9-cca3-47cf-89e4-77fac53a5d18-kube-api-access-sc7f9\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205507 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92e65cdf-ee57-4aed-91ca-644ebdc03a27-metrics-tls\") pod \"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205533 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vxgv9\" (UID: \"0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205567 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/65264825-0c26-4bac-bac5-05966eddd111-srv-cert\") pod \"catalog-operator-68c6474976-s595q\" (UID: \"65264825-0c26-4bac-bac5-05966eddd111\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205588 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/65264825-0c26-4bac-bac5-05966eddd111-profile-collector-cert\") pod \"catalog-operator-68c6474976-s595q\" (UID: \"65264825-0c26-4bac-bac5-05966eddd111\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205612 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-config\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205630 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25c5713b-847b-4f3f-9a76-2ded118334b3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205648 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flch4\" (UniqueName: \"kubernetes.io/projected/25c5713b-847b-4f3f-9a76-2ded118334b3-kube-api-access-flch4\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" Feb 19 
10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205667 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-machine-approver-tls\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205697 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-config\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205723 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brfgq\" (UniqueName: \"kubernetes.io/projected/0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d-kube-api-access-brfgq\") pod \"openshift-apiserver-operator-796bbdcf4f-vxgv9\" (UID: \"0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205746 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-667qb\" (UniqueName: \"kubernetes.io/projected/65264825-0c26-4bac-bac5-05966eddd111-kube-api-access-667qb\") pod \"catalog-operator-68c6474976-s595q\" (UID: \"65264825-0c26-4bac-bac5-05966eddd111\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205769 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92e65cdf-ee57-4aed-91ca-644ebdc03a27-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205798 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27e6ed8e-63f7-43a4-b162-3edc3c30c442-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205826 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d19195ba-d354-4a69-bc60-364ba77eead8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bbw9j\" (UID: \"d19195ba-d354-4a69-bc60-364ba77eead8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205852 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/25c5713b-847b-4f3f-9a76-2ded118334b3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205876 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-auth-proxy-config\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:48 crc kubenswrapper[4811]: 
I0219 10:10:48.205903 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19195ba-d354-4a69-bc60-364ba77eead8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bbw9j\" (UID: \"d19195ba-d354-4a69-bc60-364ba77eead8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205933 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25c5713b-847b-4f3f-9a76-2ded118334b3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205955 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e6ed8e-63f7-43a4-b162-3edc3c30c442-serving-cert\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.205987 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfb2l\" (UniqueName: \"kubernetes.io/projected/27e6ed8e-63f7-43a4-b162-3edc3c30c442-kube-api-access-jfb2l\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.206014 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-vxgv9\" (UID: \"0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.206034 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e361b2e-edcc-418f-861e-83b65fc2634a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6vsq7\" (UID: \"5e361b2e-edcc-418f-861e-83b65fc2634a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.206052 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e93440f-9668-477a-8df8-a8da4d088cc8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-blkmf\" (UID: \"5e93440f-9668-477a-8df8-a8da4d088cc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.206069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e6ed8e-63f7-43a4-b162-3edc3c30c442-config\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.210897 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dh45w"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.210968 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.210983 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.212931 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e6ed8e-63f7-43a4-b162-3edc3c30c442-config\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.214488 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-auth-proxy-config\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.215424 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-config\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.216775 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27e6ed8e-63f7-43a4-b162-3edc3c30c442-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.216919 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-vxgv9\" (UID: \"0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.218394 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-client-ca\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.218675 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.219417 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-config\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.219996 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.225563 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.220198 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e361b2e-edcc-418f-861e-83b65fc2634a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6vsq7\" (UID: \"5e361b2e-edcc-418f-861e-83b65fc2634a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.221225 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27e6ed8e-63f7-43a4-b162-3edc3c30c442-service-ca-bundle\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.228390 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.229406 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6532e9-cca3-47cf-89e4-77fac53a5d18-serving-cert\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.230114 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27e6ed8e-63f7-43a4-b162-3edc3c30c442-serving-cert\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.230145 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e361b2e-edcc-418f-861e-83b65fc2634a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6vsq7\" (UID: \"5e361b2e-edcc-418f-861e-83b65fc2634a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.230773 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25c5713b-847b-4f3f-9a76-2ded118334b3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.232971 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/25c5713b-847b-4f3f-9a76-2ded118334b3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.239934 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-machine-approver-tls\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.240346 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vxgv9\" (UID: \"0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.242864 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.244887 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.244960 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b5p7b"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.258114 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.260988 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bf2qf"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.262049 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.263304 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.264772 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-d8j77"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.265541 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.267805 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.268831 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p4kkk"] Feb 19 10:10:48 crc 
kubenswrapper[4811]: I0219 10:10:48.269784 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tnfdb"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.270360 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tnfdb" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.273166 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-87n5j"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.275491 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.277685 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r462p"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.278360 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.278677 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.279488 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8jqkt"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.280825 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kfhx5"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.282057 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.282589 4811 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.282727 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.284635 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bsbln"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.285810 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.287171 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8wjls"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.288396 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.289152 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c46m7"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.290504 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zrd6d"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.292112 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.293511 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mxhh7"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.294809 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9668s"] 
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.296705 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9668s" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.299387 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l7559"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.301511 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9668s"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.302767 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.319651 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ntk89"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.321209 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.321721 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ntk89"] Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.323300 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.342982 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.348443 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92e65cdf-ee57-4aed-91ca-644ebdc03a27-metrics-tls\") pod \"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.362243 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.382548 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.392595 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e93440f-9668-477a-8df8-a8da4d088cc8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-blkmf\" (UID: \"5e93440f-9668-477a-8df8-a8da4d088cc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf" Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.409656 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.416957 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/92e65cdf-ee57-4aed-91ca-644ebdc03a27-trusted-ca\") pod \"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.423508 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.443655 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.447751 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e93440f-9668-477a-8df8-a8da4d088cc8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-blkmf\" (UID: \"5e93440f-9668-477a-8df8-a8da4d088cc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.463592 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.482504 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.502275 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.522875 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.542843 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.562210 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.582657 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.589612 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19195ba-d354-4a69-bc60-364ba77eead8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bbw9j\" (UID: \"d19195ba-d354-4a69-bc60-364ba77eead8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.602476 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.604642 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19195ba-d354-4a69-bc60-364ba77eead8-config\") pod \"kube-controller-manager-operator-78b949d7b-bbw9j\" (UID: \"d19195ba-d354-4a69-bc60-364ba77eead8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.622732 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.642046 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.650814 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/65264825-0c26-4bac-bac5-05966eddd111-srv-cert\") pod \"catalog-operator-68c6474976-s595q\" (UID: \"65264825-0c26-4bac-bac5-05966eddd111\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.663053 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.673836 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/65264825-0c26-4bac-bac5-05966eddd111-profile-collector-cert\") pod \"catalog-operator-68c6474976-s595q\" (UID: \"65264825-0c26-4bac-bac5-05966eddd111\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.684034 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.702743 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.813308 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.813596 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.820314 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.822899 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.843372 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.862807 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.882987 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.903266 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.905644 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ln785"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.923015 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.942987 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.963875 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 19 10:10:48 crc kubenswrapper[4811]: I0219 10:10:48.982897 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.003407 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.022341 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.042675 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.063025 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.083185 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.100365 4811 request.go:700] Waited for 1.001606784s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.119700 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qtfv\" (UniqueName: \"kubernetes.io/projected/94d45853-c52b-46a4-b27d-b4d33b7527d9-kube-api-access-6qtfv\") pod \"apiserver-7bbb656c7d-nx5d7\" (UID: \"94d45853-c52b-46a4-b27d-b4d33b7527d9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.139828 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt8bx\" (UniqueName: \"kubernetes.io/projected/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-kube-api-access-dt8bx\") pod \"controller-manager-879f6c89f-24tbz\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.159327 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhr78\" (UniqueName: \"kubernetes.io/projected/077abdec-a39e-43d3-bbba-8dd0afbb448b-kube-api-access-bhr78\") pod \"machine-api-operator-5694c8668f-bf2qf\" (UID: \"077abdec-a39e-43d3-bbba-8dd0afbb448b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.162117 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.183597 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.190486 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.211920 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.212177 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.223133 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.243039 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.262651 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.283090 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.325159 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.326400 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kttl5\" (UniqueName: \"kubernetes.io/projected/de01cd94-d3c6-4fa5-9033-9a65a318ab82-kube-api-access-kttl5\") pod \"console-f9d7485db-dh45w\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " pod="openshift-console/console-f9d7485db-dh45w"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.362686 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.363874 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr9m9\" (UniqueName: \"kubernetes.io/projected/47f24fcb-a536-4173-b099-9fadf4be99c6-kube-api-access-xr9m9\") pod \"downloads-7954f5f757-t44xg\" (UID: \"47f24fcb-a536-4173-b099-9fadf4be99c6\") " pod="openshift-console/downloads-7954f5f757-t44xg"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.403209 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.406963 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6gfn\" (UniqueName: \"kubernetes.io/projected/889906af-3050-48fa-816a-7f0631f0ce47-kube-api-access-h6gfn\") pod \"openshift-config-operator-7777fb866f-8j7vw\" (UID: \"889906af-3050-48fa-816a-7f0631f0ce47\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.423824 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.431885 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7"]
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.438025 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.443596 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.443937 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bf2qf"]
Feb 19 10:10:49 crc kubenswrapper[4811]: W0219 10:10:49.443970 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94d45853_c52b_46a4_b27d_b4d33b7527d9.slice/crio-d67526f955f4500df4bde755b9cdb9b0c680f52876264ef6e75564f0ed1f8349 WatchSource:0}: Error finding container d67526f955f4500df4bde755b9cdb9b0c680f52876264ef6e75564f0ed1f8349: Status 404 returned error can't find the container with id d67526f955f4500df4bde755b9cdb9b0c680f52876264ef6e75564f0ed1f8349
Feb 19 10:10:49 crc kubenswrapper[4811]: W0219 10:10:49.454308 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod077abdec_a39e_43d3_bbba_8dd0afbb448b.slice/crio-e2d18b05e5d15ce02123a806dd4f29b591757d59fc60feeace2af0180183cb30 WatchSource:0}: Error finding container e2d18b05e5d15ce02123a806dd4f29b591757d59fc60feeace2af0180183cb30: Status 404 returned error can't find the container with id e2d18b05e5d15ce02123a806dd4f29b591757d59fc60feeace2af0180183cb30
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.466466 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.487005 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2fvs\" (UniqueName: \"kubernetes.io/projected/0c629b88-bf23-410a-903e-ae58dffc3bad-kube-api-access-j2fvs\") pod \"console-operator-58897d9998-d8j77\" (UID: \"0c629b88-bf23-410a-903e-ae58dffc3bad\") " pod="openshift-console-operator/console-operator-58897d9998-d8j77"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.493266 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" event={"ID":"077abdec-a39e-43d3-bbba-8dd0afbb448b","Type":"ContainerStarted","Data":"e2d18b05e5d15ce02123a806dd4f29b591757d59fc60feeace2af0180183cb30"}
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.496723 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" event={"ID":"94d45853-c52b-46a4-b27d-b4d33b7527d9","Type":"ContainerStarted","Data":"d67526f955f4500df4bde755b9cdb9b0c680f52876264ef6e75564f0ed1f8349"}
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.499305 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5kxc\" (UniqueName: \"kubernetes.io/projected/37172e17-0240-46d9-878a-f396d66ea307-kube-api-access-r5kxc\") pod \"cluster-samples-operator-665b6dd947-zrrrs\" (UID: \"37172e17-0240-46d9-878a-f396d66ea307\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.502931 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.523669 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.535313 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dh45w"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.543489 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.549609 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t44xg"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.563047 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.582995 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.583049 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.583065 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.583094 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-d8j77"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.583202 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.586859 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.603095 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.623920 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.643429 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.666510 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.685184 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.694753 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-24tbz"]
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.707467 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.722868 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.733183 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw"]
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.742798 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.746644 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.763565 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.782531 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.803902 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.816118 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t44xg"]
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.826044 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.843022 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.860108 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dh45w"]
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.865101 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 19 10:10:49 crc kubenswrapper[4811]: W0219 10:10:49.883335 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde01cd94_d3c6_4fa5_9033_9a65a318ab82.slice/crio-f3b290e0ef18510fc5fea8cefdbed8b9af613450e682d03c358f3ed17fd9925b WatchSource:0}: Error finding container f3b290e0ef18510fc5fea8cefdbed8b9af613450e682d03c358f3ed17fd9925b: Status 404 returned error can't find the container with id f3b290e0ef18510fc5fea8cefdbed8b9af613450e682d03c358f3ed17fd9925b
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.883521 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.899281 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-d8j77"]
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.902977 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.922558 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.971037 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flch4\" (UniqueName: \"kubernetes.io/projected/25c5713b-847b-4f3f-9a76-2ded118334b3-kube-api-access-flch4\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5"
Feb 19 10:10:49 crc kubenswrapper[4811]: I0219 10:10:49.996120 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hbdr\" (UniqueName: \"kubernetes.io/projected/d25a9c9a-efcd-4520-9f14-f0de4d9ed485-kube-api-access-4hbdr\") pod \"machine-approver-56656f9798-2jmvs\" (UID: \"d25a9c9a-efcd-4520-9f14-f0de4d9ed485\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.003677 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.008866 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgzgk\" (UniqueName: \"kubernetes.io/projected/92e65cdf-ee57-4aed-91ca-644ebdc03a27-kube-api-access-wgzgk\") pod \"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.018061 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs"]
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.020481 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e93440f-9668-477a-8df8-a8da4d088cc8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-blkmf\" (UID: \"5e93440f-9668-477a-8df8-a8da4d088cc8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf"
Feb 19 10:10:50 crc kubenswrapper[4811]: W0219 10:10:50.031467 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd25a9c9a_efcd_4520_9f14_f0de4d9ed485.slice/crio-ffaf2fe9cd337a26c1c4750600e10dc1ec7a6d717664e7915be6a5e7f2f98faf WatchSource:0}: Error finding container ffaf2fe9cd337a26c1c4750600e10dc1ec7a6d717664e7915be6a5e7f2f98faf: Status 404 returned error can't find the container with id ffaf2fe9cd337a26c1c4750600e10dc1ec7a6d717664e7915be6a5e7f2f98faf
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.040896 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/92e65cdf-ee57-4aed-91ca-644ebdc03a27-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xgkqk\" (UID: \"92e65cdf-ee57-4aed-91ca-644ebdc03a27\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.067997 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc7f9\" (UniqueName: \"kubernetes.io/projected/9a6532e9-cca3-47cf-89e4-77fac53a5d18-kube-api-access-sc7f9\") pod \"route-controller-manager-6576b87f9c-kpjc5\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.086615 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfb2l\" (UniqueName: \"kubernetes.io/projected/27e6ed8e-63f7-43a4-b162-3edc3c30c442-kube-api-access-jfb2l\") pod \"authentication-operator-69f744f599-87n5j\" (UID: \"27e6ed8e-63f7-43a4-b162-3edc3c30c442\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.090527 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.098375 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/25c5713b-847b-4f3f-9a76-2ded118334b3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zhzk5\" (UID: \"25c5713b-847b-4f3f-9a76-2ded118334b3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.100288 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.121000 4811 request.go:700] Waited for 1.905255495s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.124283 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brfgq\" (UniqueName: \"kubernetes.io/projected/0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d-kube-api-access-brfgq\") pod \"openshift-apiserver-operator-796bbdcf4f-vxgv9\" (UID: \"0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.124285 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.131933 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.149129 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-667qb\" (UniqueName: \"kubernetes.io/projected/65264825-0c26-4bac-bac5-05966eddd111-kube-api-access-667qb\") pod \"catalog-operator-68c6474976-s595q\" (UID: \"65264825-0c26-4bac-bac5-05966eddd111\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.163742 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d19195ba-d354-4a69-bc60-364ba77eead8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bbw9j\" (UID: \"d19195ba-d354-4a69-bc60-364ba77eead8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.164112 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.179563 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fltlx\" (UniqueName: \"kubernetes.io/projected/5e361b2e-edcc-418f-861e-83b65fc2634a-kube-api-access-fltlx\") pod \"openshift-controller-manager-operator-756b6f6bc6-6vsq7\" (UID: \"5e361b2e-edcc-418f-861e-83b65fc2634a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.182393 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.201096 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.206577 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.224170 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.249607 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.264894 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.275892 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.282602 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.307529 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.328332 4811 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.344071 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.358978 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.363267 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.427419 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.453223 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.465154 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.468967 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk"]
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.479217 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.482541 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.482585 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9c16aff-60a4-404a-b3f6-bc951d28e931-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.482756 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-bound-sa-token\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.482877 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9c16aff-60a4-404a-b3f6-bc951d28e931-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.482956 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-certificates\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.482990 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-tls\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.483059 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtrdh\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-kube-api-access-wtrdh\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.483151 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-trusted-ca\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.483235 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50
crc kubenswrapper[4811]: E0219 10:10:50.484738 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:50.984722371 +0000 UTC m=+139.511187057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.504080 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.522948 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.532743 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" event={"ID":"2fce0222-ff8a-4a92-8c75-bdbd44f509d5","Type":"ContainerStarted","Data":"6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd"} Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.532819 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" event={"ID":"2fce0222-ff8a-4a92-8c75-bdbd44f509d5","Type":"ContainerStarted","Data":"a9b1c9f42a3f66118bcde72c8c05aeb2c9605ea98cfc5f89fbd3e85a23cc4edc"} Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.532843 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.538633 4811 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-24tbz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.538700 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" podUID="2fce0222-ff8a-4a92-8c75-bdbd44f509d5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.565574 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" event={"ID":"077abdec-a39e-43d3-bbba-8dd0afbb448b","Type":"ContainerStarted","Data":"22fd0dfba63766af1bc250e75a6ac4956f9f810200858211f7df3a3684d35217"} Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.565659 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" event={"ID":"077abdec-a39e-43d3-bbba-8dd0afbb448b","Type":"ContainerStarted","Data":"ec20acaeb97c5ea0ad25087aecdaabe02bf4764e97ff44bc19235b3e470fe8d5"} Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.568409 4811 generic.go:334] "Generic (PLEG): container finished" podID="889906af-3050-48fa-816a-7f0631f0ce47" containerID="a9042b1f37e12b4519ea9ba756d50bbda3dbeac81565bfa74d9093a8311b6519" exitCode=0 Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.568508 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" 
event={"ID":"889906af-3050-48fa-816a-7f0631f0ce47","Type":"ContainerDied","Data":"a9042b1f37e12b4519ea9ba756d50bbda3dbeac81565bfa74d9093a8311b6519"} Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.568549 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" event={"ID":"889906af-3050-48fa-816a-7f0631f0ce47","Type":"ContainerStarted","Data":"dd10b1563c77260f673041360b95e2931d0762338642843a08c1d5ed97db084a"} Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.569431 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" event={"ID":"92e65cdf-ee57-4aed-91ca-644ebdc03a27","Type":"ContainerStarted","Data":"68aa78b2a23e789999c37acee9435b9bdebda2c336c660cb5763b1dcb68fb04d"} Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.571981 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t44xg" event={"ID":"47f24fcb-a536-4173-b099-9fadf4be99c6","Type":"ContainerStarted","Data":"f49fefb562829c2225cf21672edab892f9f5e5ff8c83ad804b60fa46151418da"} Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.572010 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t44xg" event={"ID":"47f24fcb-a536-4173-b099-9fadf4be99c6","Type":"ContainerStarted","Data":"9a2db6e9d3124e88e7aefc7ab2eab77973d528a2e952867ea864dd3a0839db32"} Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.572264 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t44xg" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.576927 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dh45w" event={"ID":"de01cd94-d3c6-4fa5-9033-9a65a318ab82","Type":"ContainerStarted","Data":"1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b"} Feb 19 10:10:50 crc 
kubenswrapper[4811]: I0219 10:10:50.576972 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dh45w" event={"ID":"de01cd94-d3c6-4fa5-9033-9a65a318ab82","Type":"ContainerStarted","Data":"f3b290e0ef18510fc5fea8cefdbed8b9af613450e682d03c358f3ed17fd9925b"} Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.580192 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs" event={"ID":"37172e17-0240-46d9-878a-f396d66ea307","Type":"ContainerStarted","Data":"3548233274d3d6a473831bb431805e2b676dfdccdaf1a4faeb80fb0bd000dae8"} Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.585339 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.585556 4811 generic.go:334] "Generic (PLEG): container finished" podID="94d45853-c52b-46a4-b27d-b4d33b7527d9" containerID="d7001fb22f7cafe749f5da62a533d8364a1272bf109af9389d58c2d015ea9184" exitCode=0 Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.585670 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96e56630-78f7-4f76-9d24-25ca68e8fa0a-metrics-tls\") pod \"dns-operator-744455d44c-bsbln\" (UID: \"96e56630-78f7-4f76-9d24-25ca68e8fa0a\") " pod="openshift-dns-operator/dns-operator-744455d44c-bsbln" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.585735 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-audit-dir\") pod 
\"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.585768 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9c16aff-60a4-404a-b3f6-bc951d28e931-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.585879 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-encryption-config\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.585911 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.585935 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.585977 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-certificates\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586000 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-etcd-serving-ca\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586022 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dfe3823-e78b-4f62-b928-96b36d172dac-config\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586048 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-image-import-ca\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586277 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dfe3823-e78b-4f62-b928-96b36d172dac-serving-cert\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 
10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586305 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4dfe3823-e78b-4f62-b928-96b36d172dac-etcd-service-ca\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586329 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586356 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-tls\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586379 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586405 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-etcd-client\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586481 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586509 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4dfe3823-e78b-4f62-b928-96b36d172dac-etcd-client\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586530 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4dfe3823-e78b-4f62-b928-96b36d172dac-etcd-ca\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586555 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtrdh\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-kube-api-access-wtrdh\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 
10:10:50.586580 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-node-pullsecrets\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586601 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmvrw\" (UniqueName: \"kubernetes.io/projected/96e56630-78f7-4f76-9d24-25ca68e8fa0a-kube-api-access-pmvrw\") pod \"dns-operator-744455d44c-bsbln\" (UID: \"96e56630-78f7-4f76-9d24-25ca68e8fa0a\") " pod="openshift-dns-operator/dns-operator-744455d44c-bsbln" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586627 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-audit\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586667 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-trusted-ca\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586694 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-serving-cert\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " 
pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586721 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586758 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586792 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h4n8\" (UniqueName: \"kubernetes.io/projected/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-kube-api-access-9h4n8\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586816 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h86wz\" (UniqueName: \"kubernetes.io/projected/4dfe3823-e78b-4f62-b928-96b36d172dac-kube-api-access-h86wz\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586842 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-dir\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586868 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586896 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-policies\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586948 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcks8\" (UniqueName: \"kubernetes.io/projected/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-kube-api-access-xcks8\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.586971 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-config\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 
10:10:50.586994 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.587022 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.587105 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.587171 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9c16aff-60a4-404a-b3f6-bc951d28e931-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.587194 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-bound-sa-token\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.587217 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.589251 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-certificates\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.589245 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9c16aff-60a4-404a-b3f6-bc951d28e931-ca-trust-extracted\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:50 crc kubenswrapper[4811]: E0219 10:10:50.589804 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.089556459 +0000 UTC m=+139.616021345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.593104 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9c16aff-60a4-404a-b3f6-bc951d28e931-installation-pull-secrets\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.593403 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-d8j77" event={"ID":"0c629b88-bf23-410a-903e-ae58dffc3bad","Type":"ContainerStarted","Data":"ee1af577b644e09e48f3f012ab9d9624ccd2c74ecd1c02e0c547e8dc698f8877"}
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.593441 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" event={"ID":"94d45853-c52b-46a4-b27d-b4d33b7527d9","Type":"ContainerDied","Data":"d7001fb22f7cafe749f5da62a533d8364a1272bf109af9389d58c2d015ea9184"}
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.593487 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-d8j77"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.593503 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" event={"ID":"d25a9c9a-efcd-4520-9f14-f0de4d9ed485","Type":"ContainerStarted","Data":"ffaf2fe9cd337a26c1c4750600e10dc1ec7a6d717664e7915be6a5e7f2f98faf"}
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.593681 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-tls\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.594821 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-trusted-ca\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.605597 4811 patch_prober.go:28] interesting pod/console-operator-58897d9998-d8j77 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.605707 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d8j77" podUID="0c629b88-bf23-410a-903e-ae58dffc3bad" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.605953 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.606026 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.650586 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtrdh\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-kube-api-access-wtrdh\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.669575 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-bound-sa-token\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.687930 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62dd3b62-846d-4fc2-aefe-2da8f53ac288-config\") pod \"kube-apiserver-operator-766d6c64bb-f2ljp\" (UID: \"62dd3b62-846d-4fc2-aefe-2da8f53ac288\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.687964 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqvst\" (UniqueName: \"kubernetes.io/projected/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-kube-api-access-wqvst\") pod \"collect-profiles-29524920-kwjxq\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.687983 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/420c2489-d813-4281-8a38-ecc47f25b991-apiservice-cert\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.687999 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eecb463-84f2-45b0-8281-be71a8ec27e9-serving-cert\") pod \"service-ca-operator-777779d784-dz4cb\" (UID: \"4eecb463-84f2-45b0-8281-be71a8ec27e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688019 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688035 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41cc81f-afdb-461f-a860-c8abbcb84633-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rsctk\" (UID: \"f41cc81f-afdb-461f-a860-c8abbcb84633\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688076 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-secret-volume\") pod \"collect-profiles-29524920-kwjxq\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688122 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-mountpoint-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688138 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3ab75343-a016-48d3-930c-e391f08dc9e8-proxy-tls\") pod \"machine-config-operator-74547568cd-r462p\" (UID: \"3ab75343-a016-48d3-930c-e391f08dc9e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688171 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eecb463-84f2-45b0-8281-be71a8ec27e9-config\") pod \"service-ca-operator-777779d784-dz4cb\" (UID: \"4eecb463-84f2-45b0-8281-be71a8ec27e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688209 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdd47192-fa50-42cb-8679-fba3f93df36f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-569hh\" (UID: \"fdd47192-fa50-42cb-8679-fba3f93df36f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688228 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf-node-bootstrap-token\") pod \"machine-config-server-tnfdb\" (UID: \"616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf\") " pod="openshift-machine-config-operator/machine-config-server-tnfdb"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688253 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96e56630-78f7-4f76-9d24-25ca68e8fa0a-metrics-tls\") pod \"dns-operator-744455d44c-bsbln\" (UID: \"96e56630-78f7-4f76-9d24-25ca68e8fa0a\") " pod="openshift-dns-operator/dns-operator-744455d44c-bsbln"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688280 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-audit-dir\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688296 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kprw\" (UniqueName: \"kubernetes.io/projected/bd927d4d-c8dd-4c5b-a829-ab9b73ed7902-kube-api-access-2kprw\") pod \"ingress-canary-9668s\" (UID: \"bd927d4d-c8dd-4c5b-a829-ab9b73ed7902\") " pod="openshift-ingress-canary/ingress-canary-9668s"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688312 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ab75343-a016-48d3-930c-e391f08dc9e8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r462p\" (UID: \"3ab75343-a016-48d3-930c-e391f08dc9e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688379 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62dd3b62-846d-4fc2-aefe-2da8f53ac288-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f2ljp\" (UID: \"62dd3b62-846d-4fc2-aefe-2da8f53ac288\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688396 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/420c2489-d813-4281-8a38-ecc47f25b991-tmpfs\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688413 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/813c1daf-b1ec-4d19-8cb9-39b9088bb3ff-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l7559\" (UID: \"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688430 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swb6b\" (UniqueName: \"kubernetes.io/projected/813c1daf-b1ec-4d19-8cb9-39b9088bb3ff-kube-api-access-swb6b\") pod \"machine-config-controller-84d6567774-l7559\" (UID: \"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.688466 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcnnf\" (UniqueName: \"kubernetes.io/projected/420c2489-d813-4281-8a38-ecc47f25b991-kube-api-access-bcnnf\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.690498 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.690549 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.690634 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e00367b-32b1-47e6-b543-82deeff3d190-config-volume\") pod \"dns-default-kfhx5\" (UID: \"0e00367b-32b1-47e6-b543-82deeff3d190\") " pod="openshift-dns/dns-default-kfhx5"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.690687 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f41cc81f-afdb-461f-a860-c8abbcb84633-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rsctk\" (UID: \"f41cc81f-afdb-461f-a860-c8abbcb84633\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.690789 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4dfe3823-e78b-4f62-b928-96b36d172dac-etcd-service-ca\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.690831 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tldzc\" (UniqueName: \"kubernetes.io/projected/89c5cf4e-a301-48d2-8d31-056b6afc187b-kube-api-access-tldzc\") pod \"marketplace-operator-79b997595-mxhh7\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.690882 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-etcd-client\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.690908 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.690938 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4dfe3823-e78b-4f62-b928-96b36d172dac-etcd-client\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.691017 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4dfe3823-e78b-4f62-b928-96b36d172dac-etcd-ca\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.691052 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xfl9\" (UniqueName: \"kubernetes.io/projected/28d91e3a-4d4d-42c7-8f03-d44e6e539e66-kube-api-access-2xfl9\") pod \"service-ca-9c57cc56f-8jqkt\" (UID: \"28d91e3a-4d4d-42c7-8f03-d44e6e539e66\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.691109 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-audit-dir\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.695717 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmvrw\" (UniqueName: \"kubernetes.io/projected/96e56630-78f7-4f76-9d24-25ca68e8fa0a-kube-api-access-pmvrw\") pod \"dns-operator-744455d44c-bsbln\" (UID: \"96e56630-78f7-4f76-9d24-25ca68e8fa0a\") " pod="openshift-dns-operator/dns-operator-744455d44c-bsbln"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.695778 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-node-pullsecrets\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.695798 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-audit\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.695828 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0e00367b-32b1-47e6-b543-82deeff3d190-metrics-tls\") pod \"dns-default-kfhx5\" (UID: \"0e00367b-32b1-47e6-b543-82deeff3d190\") " pod="openshift-dns/dns-default-kfhx5"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.695872 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30ea8030-3330-42dc-bc17-00a8058c7b04-metrics-certs\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.695917 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk7k4\" (UniqueName: \"kubernetes.io/projected/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-kube-api-access-nk7k4\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.696510 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4dfe3823-e78b-4f62-b928-96b36d172dac-etcd-ca\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.697795 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4dfe3823-e78b-4f62-b928-96b36d172dac-etcd-service-ca\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.699051 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-audit\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.700094 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-node-pullsecrets\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.700301 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.700584 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/420c2489-d813-4281-8a38-ecc47f25b991-webhook-cert\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.700645 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.700691 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f803b9c8-002a-46a8-9eca-6b074ef7bda3-srv-cert\") pod \"olm-operator-6b444d44fb-f2zf8\" (UID: \"f803b9c8-002a-46a8-9eca-6b074ef7bda3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.700737 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.700756 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-dir\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.700830 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnfnl\" (UniqueName: \"kubernetes.io/projected/fdd47192-fa50-42cb-8679-fba3f93df36f-kube-api-access-wnfnl\") pod \"package-server-manager-789f6589d5-569hh\" (UID: \"fdd47192-fa50-42cb-8679-fba3f93df36f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.700916 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fcdf22f-c384-45cd-a738-a6ec8a9d181e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-84dtz\" (UID: \"1fcdf22f-c384-45cd-a738-a6ec8a9d181e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.700954 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-dir\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.702343 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.703510 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcks8\" (UniqueName: \"kubernetes.io/projected/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-kube-api-access-xcks8\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.703538 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30ea8030-3330-42dc-bc17-00a8058c7b04-service-ca-bundle\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.703576 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-config\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.703632 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.703696 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl8h4\" (UniqueName: \"kubernetes.io/projected/f803b9c8-002a-46a8-9eca-6b074ef7bda3-kube-api-access-vl8h4\") pod \"olm-operator-6b444d44fb-f2zf8\" (UID: \"f803b9c8-002a-46a8-9eca-6b074ef7bda3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.703783 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.703811 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf-certs\") pod \"machine-config-server-tnfdb\" (UID: \"616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf\") " pod="openshift-machine-config-operator/machine-config-server-tnfdb"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.703866 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/28d91e3a-4d4d-42c7-8f03-d44e6e539e66-signing-key\") pod \"service-ca-9c57cc56f-8jqkt\" (UID: \"28d91e3a-4d4d-42c7-8f03-d44e6e539e66\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.703901 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-config-volume\") pod \"collect-profiles-29524920-kwjxq\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.704082 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-config\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.704176 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.704234 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzsn\" (UniqueName: \"kubernetes.io/projected/30ea8030-3330-42dc-bc17-00a8058c7b04-kube-api-access-6fzsn\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.704772 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.705080 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f803b9c8-002a-46a8-9eca-6b074ef7bda3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f2zf8\" (UID: \"f803b9c8-002a-46a8-9eca-6b074ef7bda3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.705117 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.705175 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.705256 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-registration-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.705322 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mxhh7\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.705344 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvw2q\" (UniqueName: \"kubernetes.io/projected/0e00367b-32b1-47e6-b543-82deeff3d190-kube-api-access-kvw2q\") pod \"dns-default-kfhx5\" (UID: \"0e00367b-32b1-47e6-b543-82deeff3d190\") " pod="openshift-dns/dns-default-kfhx5"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.705362 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5q9\" (UniqueName: \"kubernetes.io/projected/4eecb463-84f2-45b0-8281-be71a8ec27e9-kube-api-access-lc5q9\") pod \"service-ca-operator-777779d784-dz4cb\" (UID: \"4eecb463-84f2-45b0-8281-be71a8ec27e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.706241 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/30ea8030-3330-42dc-bc17-00a8058c7b04-default-certificate\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.706305 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-csi-data-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.706346 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-encryption-config\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.709310 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96e56630-78f7-4f76-9d24-25ca68e8fa0a-metrics-tls\") pod \"dns-operator-744455d44c-bsbln\" (UID: \"96e56630-78f7-4f76-9d24-25ca68e8fa0a\") " pod="openshift-dns-operator/dns-operator-744455d44c-bsbln"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.709608 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dfe3823-e78b-4f62-b928-96b36d172dac-config\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.710413 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-etcd-serving-ca\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.711230 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.711338 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-image-import-ca\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.711442 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/28d91e3a-4d4d-42c7-8f03-d44e6e539e66-signing-cabundle\") pod \"service-ca-9c57cc56f-8jqkt\" (UID: \"28d91e3a-4d4d-42c7-8f03-d44e6e539e66\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.711907 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-etcd-serving-ca\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.712290 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dfe3823-e78b-4f62-b928-96b36d172dac-serving-cert\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.712322 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-socket-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.713134 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-etcd-client\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b"
Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.713996 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-encryption-config\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714043 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714090 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714137 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/813c1daf-b1ec-4d19-8cb9-39b9088bb3ff-proxy-tls\") pod \"machine-config-controller-84d6567774-l7559\" (UID: \"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714183 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-plugins-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: 
\"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714215 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxc8l\" (UniqueName: \"kubernetes.io/projected/f41cc81f-afdb-461f-a860-c8abbcb84633-kube-api-access-sxc8l\") pod \"kube-storage-version-migrator-operator-b67b599dd-rsctk\" (UID: \"f41cc81f-afdb-461f-a860-c8abbcb84633\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714245 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk5gr\" (UniqueName: \"kubernetes.io/projected/616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf-kube-api-access-jk5gr\") pod \"machine-config-server-tnfdb\" (UID: \"616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf\") " pod="openshift-machine-config-operator/machine-config-server-tnfdb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714264 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62dd3b62-846d-4fc2-aefe-2da8f53ac288-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f2ljp\" (UID: \"62dd3b62-846d-4fc2-aefe-2da8f53ac288\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714292 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxfwj\" (UniqueName: \"kubernetes.io/projected/fab44d7c-1d10-4b1f-b173-b06fff08c286-kube-api-access-lxfwj\") pod \"migrator-59844c95c7-w5ffb\" (UID: \"fab44d7c-1d10-4b1f-b173-b06fff08c286\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 
10:10:50.714313 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/780ea2c0-bcb2-40bd-b6b8-c62cd877d72e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c46m7\" (UID: \"780ea2c0-bcb2-40bd-b6b8-c62cd877d72e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714330 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5bqb\" (UniqueName: \"kubernetes.io/projected/3ab75343-a016-48d3-930c-e391f08dc9e8-kube-api-access-b5bqb\") pod \"machine-config-operator-74547568cd-r462p\" (UID: \"3ab75343-a016-48d3-930c-e391f08dc9e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714404 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd927d4d-c8dd-4c5b-a829-ab9b73ed7902-cert\") pod \"ingress-canary-9668s\" (UID: \"bd927d4d-c8dd-4c5b-a829-ab9b73ed7902\") " pod="openshift-ingress-canary/ingress-canary-9668s" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714507 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3ab75343-a016-48d3-930c-e391f08dc9e8-images\") pod \"machine-config-operator-74547568cd-r462p\" (UID: \"3ab75343-a016-48d3-930c-e391f08dc9e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714565 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-serving-cert\") pod \"apiserver-76f77b778f-b5p7b\" (UID: 
\"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.714739 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-image-import-ca\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.715722 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.716192 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/30ea8030-3330-42dc-bc17-00a8058c7b04-stats-auth\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.716208 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.716214 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbn6x\" (UniqueName: 
\"kubernetes.io/projected/780ea2c0-bcb2-40bd-b6b8-c62cd877d72e-kube-api-access-sbn6x\") pod \"multus-admission-controller-857f4d67dd-c46m7\" (UID: \"780ea2c0-bcb2-40bd-b6b8-c62cd877d72e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.716249 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.716270 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h4n8\" (UniqueName: \"kubernetes.io/projected/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-kube-api-access-9h4n8\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.716288 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h86wz\" (UniqueName: \"kubernetes.io/projected/4dfe3823-e78b-4f62-b928-96b36d172dac-kube-api-access-h86wz\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.716367 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc 
kubenswrapper[4811]: I0219 10:10:50.716387 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4msns\" (UniqueName: \"kubernetes.io/projected/1fcdf22f-c384-45cd-a738-a6ec8a9d181e-kube-api-access-4msns\") pod \"control-plane-machine-set-operator-78cbb6b69f-84dtz\" (UID: \"1fcdf22f-c384-45cd-a738-a6ec8a9d181e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.716523 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mxhh7\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.716555 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-policies\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.717021 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-policies\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.717770 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: E0219 10:10:50.718047 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.218030397 +0000 UTC m=+139.744495303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.727487 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4dfe3823-e78b-4f62-b928-96b36d172dac-etcd-client\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.727660 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dfe3823-e78b-4f62-b928-96b36d172dac-config\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.733966 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.741933 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q"] Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.744395 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-serving-cert\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.747738 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5"] Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.749268 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dfe3823-e78b-4f62-b928-96b36d172dac-serving-cert\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.749595 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.750182 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5"] Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.750284 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.751959 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf"] Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.760377 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmvrw\" (UniqueName: \"kubernetes.io/projected/96e56630-78f7-4f76-9d24-25ca68e8fa0a-kube-api-access-pmvrw\") pod \"dns-operator-744455d44c-bsbln\" (UID: \"96e56630-78f7-4f76-9d24-25ca68e8fa0a\") " pod="openshift-dns-operator/dns-operator-744455d44c-bsbln" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.773755 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcks8\" (UniqueName: \"kubernetes.io/projected/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-kube-api-access-xcks8\") pod \"oauth-openshift-558db77b4-8wjls\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.798883 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7"] Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.800138 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h86wz\" (UniqueName: 
\"kubernetes.io/projected/4dfe3823-e78b-4f62-b928-96b36d172dac-kube-api-access-h86wz\") pod \"etcd-operator-b45778765-zrd6d\" (UID: \"4dfe3823-e78b-4f62-b928-96b36d172dac\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.817020 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h4n8\" (UniqueName: \"kubernetes.io/projected/f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086-kube-api-access-9h4n8\") pod \"apiserver-76f77b778f-b5p7b\" (UID: \"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086\") " pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.819716 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:50 crc kubenswrapper[4811]: E0219 10:10:50.819825 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.319797402 +0000 UTC m=+139.846262078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.820515 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-87n5j"] Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.824264 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/28d91e3a-4d4d-42c7-8f03-d44e6e539e66-signing-cabundle\") pod \"service-ca-9c57cc56f-8jqkt\" (UID: \"28d91e3a-4d4d-42c7-8f03-d44e6e539e66\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.824674 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-socket-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.824968 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/813c1daf-b1ec-4d19-8cb9-39b9088bb3ff-proxy-tls\") pod \"machine-config-controller-84d6567774-l7559\" (UID: \"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.825002 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-jk5gr\" (UniqueName: \"kubernetes.io/projected/616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf-kube-api-access-jk5gr\") pod \"machine-config-server-tnfdb\" (UID: \"616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf\") " pod="openshift-machine-config-operator/machine-config-server-tnfdb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.825022 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62dd3b62-846d-4fc2-aefe-2da8f53ac288-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f2ljp\" (UID: \"62dd3b62-846d-4fc2-aefe-2da8f53ac288\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.825042 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-plugins-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.825062 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxc8l\" (UniqueName: \"kubernetes.io/projected/f41cc81f-afdb-461f-a860-c8abbcb84633-kube-api-access-sxc8l\") pod \"kube-storage-version-migrator-operator-b67b599dd-rsctk\" (UID: \"f41cc81f-afdb-461f-a860-c8abbcb84633\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.825087 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxfwj\" (UniqueName: \"kubernetes.io/projected/fab44d7c-1d10-4b1f-b173-b06fff08c286-kube-api-access-lxfwj\") pod \"migrator-59844c95c7-w5ffb\" (UID: \"fab44d7c-1d10-4b1f-b173-b06fff08c286\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.825121 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/780ea2c0-bcb2-40bd-b6b8-c62cd877d72e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c46m7\" (UID: \"780ea2c0-bcb2-40bd-b6b8-c62cd877d72e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.825640 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/28d91e3a-4d4d-42c7-8f03-d44e6e539e66-signing-cabundle\") pod \"service-ca-9c57cc56f-8jqkt\" (UID: \"28d91e3a-4d4d-42c7-8f03-d44e6e539e66\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826552 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5bqb\" (UniqueName: \"kubernetes.io/projected/3ab75343-a016-48d3-930c-e391f08dc9e8-kube-api-access-b5bqb\") pod \"machine-config-operator-74547568cd-r462p\" (UID: \"3ab75343-a016-48d3-930c-e391f08dc9e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826599 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd927d4d-c8dd-4c5b-a829-ab9b73ed7902-cert\") pod \"ingress-canary-9668s\" (UID: \"bd927d4d-c8dd-4c5b-a829-ab9b73ed7902\") " pod="openshift-ingress-canary/ingress-canary-9668s" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826623 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3ab75343-a016-48d3-930c-e391f08dc9e8-images\") pod \"machine-config-operator-74547568cd-r462p\" (UID: 
\"3ab75343-a016-48d3-930c-e391f08dc9e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826678 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/30ea8030-3330-42dc-bc17-00a8058c7b04-stats-auth\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826701 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbn6x\" (UniqueName: \"kubernetes.io/projected/780ea2c0-bcb2-40bd-b6b8-c62cd877d72e-kube-api-access-sbn6x\") pod \"multus-admission-controller-857f4d67dd-c46m7\" (UID: \"780ea2c0-bcb2-40bd-b6b8-c62cd877d72e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826696 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-socket-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826731 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4msns\" (UniqueName: \"kubernetes.io/projected/1fcdf22f-c384-45cd-a738-a6ec8a9d181e-kube-api-access-4msns\") pod \"control-plane-machine-set-operator-78cbb6b69f-84dtz\" (UID: \"1fcdf22f-c384-45cd-a738-a6ec8a9d181e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826782 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826824 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mxhh7\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826846 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62dd3b62-846d-4fc2-aefe-2da8f53ac288-config\") pod \"kube-apiserver-operator-766d6c64bb-f2ljp\" (UID: \"62dd3b62-846d-4fc2-aefe-2da8f53ac288\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826878 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqvst\" (UniqueName: \"kubernetes.io/projected/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-kube-api-access-wqvst\") pod \"collect-profiles-29524920-kwjxq\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826920 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/420c2489-d813-4281-8a38-ecc47f25b991-apiservice-cert\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:10:50 crc kubenswrapper[4811]: 
I0219 10:10:50.826953 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eecb463-84f2-45b0-8281-be71a8ec27e9-serving-cert\") pod \"service-ca-operator-777779d784-dz4cb\" (UID: \"4eecb463-84f2-45b0-8281-be71a8ec27e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.826988 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41cc81f-afdb-461f-a860-c8abbcb84633-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rsctk\" (UID: \"f41cc81f-afdb-461f-a860-c8abbcb84633\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827018 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-secret-volume\") pod \"collect-profiles-29524920-kwjxq\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827099 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-mountpoint-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827129 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3ab75343-a016-48d3-930c-e391f08dc9e8-proxy-tls\") pod \"machine-config-operator-74547568cd-r462p\" (UID: \"3ab75343-a016-48d3-930c-e391f08dc9e8\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827162 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eecb463-84f2-45b0-8281-be71a8ec27e9-config\") pod \"service-ca-operator-777779d784-dz4cb\" (UID: \"4eecb463-84f2-45b0-8281-be71a8ec27e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827195 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdd47192-fa50-42cb-8679-fba3f93df36f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-569hh\" (UID: \"fdd47192-fa50-42cb-8679-fba3f93df36f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827241 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf-node-bootstrap-token\") pod \"machine-config-server-tnfdb\" (UID: \"616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf\") " pod="openshift-machine-config-operator/machine-config-server-tnfdb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827274 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ab75343-a016-48d3-930c-e391f08dc9e8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r462p\" (UID: \"3ab75343-a016-48d3-930c-e391f08dc9e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827309 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kprw\" 
(UniqueName: \"kubernetes.io/projected/bd927d4d-c8dd-4c5b-a829-ab9b73ed7902-kube-api-access-2kprw\") pod \"ingress-canary-9668s\" (UID: \"bd927d4d-c8dd-4c5b-a829-ab9b73ed7902\") " pod="openshift-ingress-canary/ingress-canary-9668s" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827441 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/420c2489-d813-4281-8a38-ecc47f25b991-tmpfs\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827510 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/813c1daf-b1ec-4d19-8cb9-39b9088bb3ff-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l7559\" (UID: \"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827555 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swb6b\" (UniqueName: \"kubernetes.io/projected/813c1daf-b1ec-4d19-8cb9-39b9088bb3ff-kube-api-access-swb6b\") pod \"machine-config-controller-84d6567774-l7559\" (UID: \"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827583 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62dd3b62-846d-4fc2-aefe-2da8f53ac288-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f2ljp\" (UID: \"62dd3b62-846d-4fc2-aefe-2da8f53ac288\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" Feb 19 10:10:50 
crc kubenswrapper[4811]: I0219 10:10:50.827614 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcnnf\" (UniqueName: \"kubernetes.io/projected/420c2489-d813-4281-8a38-ecc47f25b991-kube-api-access-bcnnf\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827672 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e00367b-32b1-47e6-b543-82deeff3d190-config-volume\") pod \"dns-default-kfhx5\" (UID: \"0e00367b-32b1-47e6-b543-82deeff3d190\") " pod="openshift-dns/dns-default-kfhx5" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827708 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f41cc81f-afdb-461f-a860-c8abbcb84633-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rsctk\" (UID: \"f41cc81f-afdb-461f-a860-c8abbcb84633\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827751 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tldzc\" (UniqueName: \"kubernetes.io/projected/89c5cf4e-a301-48d2-8d31-056b6afc187b-kube-api-access-tldzc\") pod \"marketplace-operator-79b997595-mxhh7\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827791 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xfl9\" (UniqueName: \"kubernetes.io/projected/28d91e3a-4d4d-42c7-8f03-d44e6e539e66-kube-api-access-2xfl9\") pod \"service-ca-9c57cc56f-8jqkt\" 
(UID: \"28d91e3a-4d4d-42c7-8f03-d44e6e539e66\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827842 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0e00367b-32b1-47e6-b543-82deeff3d190-metrics-tls\") pod \"dns-default-kfhx5\" (UID: \"0e00367b-32b1-47e6-b543-82deeff3d190\") " pod="openshift-dns/dns-default-kfhx5" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827879 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30ea8030-3330-42dc-bc17-00a8058c7b04-metrics-certs\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827912 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/420c2489-d813-4281-8a38-ecc47f25b991-webhook-cert\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827944 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk7k4\" (UniqueName: \"kubernetes.io/projected/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-kube-api-access-nk7k4\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.827976 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f803b9c8-002a-46a8-9eca-6b074ef7bda3-srv-cert\") pod \"olm-operator-6b444d44fb-f2zf8\" (UID: 
\"f803b9c8-002a-46a8-9eca-6b074ef7bda3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828027 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnfnl\" (UniqueName: \"kubernetes.io/projected/fdd47192-fa50-42cb-8679-fba3f93df36f-kube-api-access-wnfnl\") pod \"package-server-manager-789f6589d5-569hh\" (UID: \"fdd47192-fa50-42cb-8679-fba3f93df36f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828424 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fcdf22f-c384-45cd-a738-a6ec8a9d181e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-84dtz\" (UID: \"1fcdf22f-c384-45cd-a738-a6ec8a9d181e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828486 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30ea8030-3330-42dc-bc17-00a8058c7b04-service-ca-bundle\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828514 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8h4\" (UniqueName: \"kubernetes.io/projected/f803b9c8-002a-46a8-9eca-6b074ef7bda3-kube-api-access-vl8h4\") pod \"olm-operator-6b444d44fb-f2zf8\" (UID: \"f803b9c8-002a-46a8-9eca-6b074ef7bda3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828557 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf-certs\") pod \"machine-config-server-tnfdb\" (UID: \"616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf\") " pod="openshift-machine-config-operator/machine-config-server-tnfdb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828580 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/28d91e3a-4d4d-42c7-8f03-d44e6e539e66-signing-key\") pod \"service-ca-9c57cc56f-8jqkt\" (UID: \"28d91e3a-4d4d-42c7-8f03-d44e6e539e66\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828610 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-config-volume\") pod \"collect-profiles-29524920-kwjxq\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828641 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzsn\" (UniqueName: \"kubernetes.io/projected/30ea8030-3330-42dc-bc17-00a8058c7b04-kube-api-access-6fzsn\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828678 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f803b9c8-002a-46a8-9eca-6b074ef7bda3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f2zf8\" (UID: \"f803b9c8-002a-46a8-9eca-6b074ef7bda3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 
10:10:50.828737 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-registration-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828766 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mxhh7\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828797 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvw2q\" (UniqueName: \"kubernetes.io/projected/0e00367b-32b1-47e6-b543-82deeff3d190-kube-api-access-kvw2q\") pod \"dns-default-kfhx5\" (UID: \"0e00367b-32b1-47e6-b543-82deeff3d190\") " pod="openshift-dns/dns-default-kfhx5" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828825 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc5q9\" (UniqueName: \"kubernetes.io/projected/4eecb463-84f2-45b0-8281-be71a8ec27e9-kube-api-access-lc5q9\") pod \"service-ca-operator-777779d784-dz4cb\" (UID: \"4eecb463-84f2-45b0-8281-be71a8ec27e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828868 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/30ea8030-3330-42dc-bc17-00a8058c7b04-default-certificate\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " 
pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.828897 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-csi-data-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.829008 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-plugins-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.829622 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-csi-data-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.830656 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3ab75343-a016-48d3-930c-e391f08dc9e8-images\") pod \"machine-config-operator-74547568cd-r462p\" (UID: \"3ab75343-a016-48d3-930c-e391f08dc9e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.830711 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/813c1daf-b1ec-4d19-8cb9-39b9088bb3ff-proxy-tls\") pod \"machine-config-controller-84d6567774-l7559\" (UID: \"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.830865 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-registration-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.831119 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f41cc81f-afdb-461f-a860-c8abbcb84633-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rsctk\" (UID: \"f41cc81f-afdb-461f-a860-c8abbcb84633\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.832152 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd927d4d-c8dd-4c5b-a829-ab9b73ed7902-cert\") pod \"ingress-canary-9668s\" (UID: \"bd927d4d-c8dd-4c5b-a829-ab9b73ed7902\") " pod="openshift-ingress-canary/ingress-canary-9668s" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.832772 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9"] Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.833352 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mxhh7\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:10:50 crc kubenswrapper[4811]: E0219 10:10:50.834553 4811 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.334531271 +0000 UTC m=+139.860995957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.835393 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/30ea8030-3330-42dc-bc17-00a8058c7b04-stats-auth\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.835785 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/420c2489-d813-4281-8a38-ecc47f25b991-tmpfs\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.835821 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f803b9c8-002a-46a8-9eca-6b074ef7bda3-srv-cert\") pod \"olm-operator-6b444d44fb-f2zf8\" (UID: \"f803b9c8-002a-46a8-9eca-6b074ef7bda3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.837654 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62dd3b62-846d-4fc2-aefe-2da8f53ac288-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f2ljp\" (UID: \"62dd3b62-846d-4fc2-aefe-2da8f53ac288\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.838649 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-secret-volume\") pod \"collect-profiles-29524920-kwjxq\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.838687 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30ea8030-3330-42dc-bc17-00a8058c7b04-metrics-certs\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.840981 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mxhh7\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.841024 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eecb463-84f2-45b0-8281-be71a8ec27e9-config\") pod \"service-ca-operator-777779d784-dz4cb\" (UID: \"4eecb463-84f2-45b0-8281-be71a8ec27e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb" Feb 19 
10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.841531 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/420c2489-d813-4281-8a38-ecc47f25b991-webhook-cert\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.841735 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-mountpoint-dir\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.842483 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdd47192-fa50-42cb-8679-fba3f93df36f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-569hh\" (UID: \"fdd47192-fa50-42cb-8679-fba3f93df36f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.842584 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/813c1daf-b1ec-4d19-8cb9-39b9088bb3ff-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l7559\" (UID: \"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.843069 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e00367b-32b1-47e6-b543-82deeff3d190-config-volume\") pod \"dns-default-kfhx5\" (UID: 
\"0e00367b-32b1-47e6-b543-82deeff3d190\") " pod="openshift-dns/dns-default-kfhx5" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.843081 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0e00367b-32b1-47e6-b543-82deeff3d190-metrics-tls\") pod \"dns-default-kfhx5\" (UID: \"0e00367b-32b1-47e6-b543-82deeff3d190\") " pod="openshift-dns/dns-default-kfhx5" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.843752 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ab75343-a016-48d3-930c-e391f08dc9e8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r462p\" (UID: \"3ab75343-a016-48d3-930c-e391f08dc9e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.844276 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30ea8030-3330-42dc-bc17-00a8058c7b04-service-ca-bundle\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.844581 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62dd3b62-846d-4fc2-aefe-2da8f53ac288-config\") pod \"kube-apiserver-operator-766d6c64bb-f2ljp\" (UID: \"62dd3b62-846d-4fc2-aefe-2da8f53ac288\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.844939 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf-node-bootstrap-token\") pod \"machine-config-server-tnfdb\" (UID: 
\"616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf\") " pod="openshift-machine-config-operator/machine-config-server-tnfdb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.845157 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-config-volume\") pod \"collect-profiles-29524920-kwjxq\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.845524 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eecb463-84f2-45b0-8281-be71a8ec27e9-serving-cert\") pod \"service-ca-operator-777779d784-dz4cb\" (UID: \"4eecb463-84f2-45b0-8281-be71a8ec27e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.845592 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/28d91e3a-4d4d-42c7-8f03-d44e6e539e66-signing-key\") pod \"service-ca-9c57cc56f-8jqkt\" (UID: \"28d91e3a-4d4d-42c7-8f03-d44e6e539e66\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.848147 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/780ea2c0-bcb2-40bd-b6b8-c62cd877d72e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c46m7\" (UID: \"780ea2c0-bcb2-40bd-b6b8-c62cd877d72e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.848749 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3ab75343-a016-48d3-930c-e391f08dc9e8-proxy-tls\") pod 
\"machine-config-operator-74547568cd-r462p\" (UID: \"3ab75343-a016-48d3-930c-e391f08dc9e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.849419 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/30ea8030-3330-42dc-bc17-00a8058c7b04-default-certificate\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.852087 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf-certs\") pod \"machine-config-server-tnfdb\" (UID: \"616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf\") " pod="openshift-machine-config-operator/machine-config-server-tnfdb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.853283 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f803b9c8-002a-46a8-9eca-6b074ef7bda3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f2zf8\" (UID: \"f803b9c8-002a-46a8-9eca-6b074ef7bda3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.859845 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fcdf22f-c384-45cd-a738-a6ec8a9d181e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-84dtz\" (UID: \"1fcdf22f-c384-45cd-a738-a6ec8a9d181e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.861407 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/420c2489-d813-4281-8a38-ecc47f25b991-apiservice-cert\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.867184 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f41cc81f-afdb-461f-a860-c8abbcb84633-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rsctk\" (UID: \"f41cc81f-afdb-461f-a860-c8abbcb84633\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" Feb 19 10:10:50 crc kubenswrapper[4811]: W0219 10:10:50.876345 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27e6ed8e_63f7_43a4_b162_3edc3c30c442.slice/crio-45d4c83e7fa0d462bb675224d80659828dc8401b53de930b8d967ad7dca7f88d WatchSource:0}: Error finding container 45d4c83e7fa0d462bb675224d80659828dc8401b53de930b8d967ad7dca7f88d: Status 404 returned error can't find the container with id 45d4c83e7fa0d462bb675224d80659828dc8401b53de930b8d967ad7dca7f88d Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.888937 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5bqb\" (UniqueName: \"kubernetes.io/projected/3ab75343-a016-48d3-930c-e391f08dc9e8-kube-api-access-b5bqb\") pod \"machine-config-operator-74547568cd-r462p\" (UID: \"3ab75343-a016-48d3-930c-e391f08dc9e8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:50 crc kubenswrapper[4811]: W0219 10:10:50.890872 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e93440f_9668_477a_8df8_a8da4d088cc8.slice/crio-993f055c71766e5b8338e752de10b78ef85eb7eed6400fdfdfd319368a6d8329 WatchSource:0}: Error finding container 993f055c71766e5b8338e752de10b78ef85eb7eed6400fdfdfd319368a6d8329: Status 404 returned error can't find the container with id 993f055c71766e5b8338e752de10b78ef85eb7eed6400fdfdfd319368a6d8329 Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.897947 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxc8l\" (UniqueName: \"kubernetes.io/projected/f41cc81f-afdb-461f-a860-c8abbcb84633-kube-api-access-sxc8l\") pod \"kube-storage-version-migrator-operator-b67b599dd-rsctk\" (UID: \"f41cc81f-afdb-461f-a860-c8abbcb84633\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.922703 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk5gr\" (UniqueName: \"kubernetes.io/projected/616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf-kube-api-access-jk5gr\") pod \"machine-config-server-tnfdb\" (UID: \"616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf\") " pod="openshift-machine-config-operator/machine-config-server-tnfdb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.929286 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j"] Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.930484 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:50 crc kubenswrapper[4811]: E0219 10:10:50.931377 4811 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.431351999 +0000 UTC m=+139.957816685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.940470 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxfwj\" (UniqueName: \"kubernetes.io/projected/fab44d7c-1d10-4b1f-b173-b06fff08c286-kube-api-access-lxfwj\") pod \"migrator-59844c95c7-w5ffb\" (UID: \"fab44d7c-1d10-4b1f-b173-b06fff08c286\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.946509 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bsbln" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.962162 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4msns\" (UniqueName: \"kubernetes.io/projected/1fcdf22f-c384-45cd-a738-a6ec8a9d181e-kube-api-access-4msns\") pod \"control-plane-machine-set-operator-78cbb6b69f-84dtz\" (UID: \"1fcdf22f-c384-45cd-a738-a6ec8a9d181e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.972103 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.985244 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbn6x\" (UniqueName: \"kubernetes.io/projected/780ea2c0-bcb2-40bd-b6b8-c62cd877d72e-kube-api-access-sbn6x\") pod \"multus-admission-controller-857f4d67dd-c46m7\" (UID: \"780ea2c0-bcb2-40bd-b6b8-c62cd877d72e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" Feb 19 10:10:50 crc kubenswrapper[4811]: I0219 10:10:50.989282 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:10:50 crc kubenswrapper[4811]: W0219 10:10:50.991552 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd19195ba_d354_4a69_bc60_364ba77eead8.slice/crio-f497b54464b1176997ee6bbddcc77a27cc839d5b200f2d31bab68491fac1c927 WatchSource:0}: Error finding container f497b54464b1176997ee6bbddcc77a27cc839d5b200f2d31bab68491fac1c927: Status 404 returned error can't find the container with id f497b54464b1176997ee6bbddcc77a27cc839d5b200f2d31bab68491fac1c927 Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.011338 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.016443 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvw2q\" (UniqueName: \"kubernetes.io/projected/0e00367b-32b1-47e6-b543-82deeff3d190-kube-api-access-kvw2q\") pod \"dns-default-kfhx5\" (UID: \"0e00367b-32b1-47e6-b543-82deeff3d190\") " pod="openshift-dns/dns-default-kfhx5" Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.038697 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.538680235 +0000 UTC m=+140.065144921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.039275 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.053709 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc5q9\" (UniqueName: \"kubernetes.io/projected/4eecb463-84f2-45b0-8281-be71a8ec27e9-kube-api-access-lc5q9\") 
pod \"service-ca-operator-777779d784-dz4cb\" (UID: \"4eecb463-84f2-45b0-8281-be71a8ec27e9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.076134 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.085262 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.097157 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcnnf\" (UniqueName: \"kubernetes.io/projected/420c2489-d813-4281-8a38-ecc47f25b991-kube-api-access-bcnnf\") pod \"packageserver-d55dfcdfc-4w6gw\" (UID: \"420c2489-d813-4281-8a38-ecc47f25b991\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.099429 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnfnl\" (UniqueName: \"kubernetes.io/projected/fdd47192-fa50-42cb-8679-fba3f93df36f-kube-api-access-wnfnl\") pod \"package-server-manager-789f6589d5-569hh\" (UID: \"fdd47192-fa50-42cb-8679-fba3f93df36f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.101622 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.107411 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk7k4\" (UniqueName: \"kubernetes.io/projected/a71943f3-8cb9-48ff-aaf0-38a30132ffd4-kube-api-access-nk7k4\") pod \"csi-hostpathplugin-ntk89\" (UID: \"a71943f3-8cb9-48ff-aaf0-38a30132ffd4\") " pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.115144 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.135150 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.137880 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl8h4\" (UniqueName: \"kubernetes.io/projected/f803b9c8-002a-46a8-9eca-6b074ef7bda3-kube-api-access-vl8h4\") pod \"olm-operator-6b444d44fb-f2zf8\" (UID: \"f803b9c8-002a-46a8-9eca-6b074ef7bda3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.143435 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.144154 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.144320 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.644298751 +0000 UTC m=+140.170763437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.155797 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.158685 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kprw\" (UniqueName: \"kubernetes.io/projected/bd927d4d-c8dd-4c5b-a829-ab9b73ed7902-kube-api-access-2kprw\") pod \"ingress-canary-9668s\" (UID: \"bd927d4d-c8dd-4c5b-a829-ab9b73ed7902\") " pod="openshift-ingress-canary/ingress-canary-9668s" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.161586 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzsn\" (UniqueName: \"kubernetes.io/projected/30ea8030-3330-42dc-bc17-00a8058c7b04-kube-api-access-6fzsn\") pod \"router-default-5444994796-cmg4v\" (UID: \"30ea8030-3330-42dc-bc17-00a8058c7b04\") " pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.168130 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.199935 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kfhx5" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.200615 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqvst\" (UniqueName: \"kubernetes.io/projected/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-kube-api-access-wqvst\") pod \"collect-profiles-29524920-kwjxq\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.202772 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tnfdb" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.205844 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swb6b\" (UniqueName: \"kubernetes.io/projected/813c1daf-b1ec-4d19-8cb9-39b9088bb3ff-kube-api-access-swb6b\") pod \"machine-config-controller-84d6567774-l7559\" (UID: \"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.206822 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tldzc\" (UniqueName: \"kubernetes.io/projected/89c5cf4e-a301-48d2-8d31-056b6afc187b-kube-api-access-tldzc\") pod \"marketplace-operator-79b997595-mxhh7\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.208175 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9668s" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.221050 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62dd3b62-846d-4fc2-aefe-2da8f53ac288-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f2ljp\" (UID: \"62dd3b62-846d-4fc2-aefe-2da8f53ac288\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.234113 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-ntk89" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.244384 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xfl9\" (UniqueName: \"kubernetes.io/projected/28d91e3a-4d4d-42c7-8f03-d44e6e539e66-kube-api-access-2xfl9\") pod \"service-ca-9c57cc56f-8jqkt\" (UID: \"28d91e3a-4d4d-42c7-8f03-d44e6e539e66\") " pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.246378 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.247370 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.747341176 +0000 UTC m=+140.273806022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.359558 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.360047 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.860010209 +0000 UTC m=+140.386474895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.360629 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.360967 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.860959622 +0000 UTC m=+140.387424308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.370604 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.393831 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.418279 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.426851 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.459251 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.462196 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.463425 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.963402512 +0000 UTC m=+140.489867208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.463596 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.468887 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:51.968871542 +0000 UTC m=+140.495336228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.477050 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.484903 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.570656 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.571101 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:52.071077227 +0000 UTC m=+140.597541913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.671842 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.672311 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:52.172297479 +0000 UTC m=+140.698762165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.747892 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" event={"ID":"d25a9c9a-efcd-4520-9f14-f0de4d9ed485","Type":"ContainerStarted","Data":"b5d99c612817223aea4a8da0ea719e49358d22f8048395e0a7689a244105d8d6"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.751302 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" event={"ID":"d25a9c9a-efcd-4520-9f14-f0de4d9ed485","Type":"ContainerStarted","Data":"08bd556e69e26d75134f264a91628971f27e3e44f6329a05a043c719d6a9f1ca"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.766618 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" event={"ID":"9a6532e9-cca3-47cf-89e4-77fac53a5d18","Type":"ContainerStarted","Data":"907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.766742 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" event={"ID":"9a6532e9-cca3-47cf-89e4-77fac53a5d18","Type":"ContainerStarted","Data":"ddbae4ea92da1fd7db167c18581b9d00a3f66b72d476e4e4d06dd0ff3e3b5bca"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.766780 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.769842 4811 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kpjc5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.769910 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" podUID="9a6532e9-cca3-47cf-89e4-77fac53a5d18" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.777396 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.778301 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:52.278270453 +0000 UTC m=+140.804735139 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.787302 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs" event={"ID":"37172e17-0240-46d9-878a-f396d66ea307","Type":"ContainerStarted","Data":"5fcefd77dc6b12901780e8596622f468bb49bfa98415a237f6663f4a634b2777"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.816077 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" event={"ID":"94d45853-c52b-46a4-b27d-b4d33b7527d9","Type":"ContainerStarted","Data":"7993438e4e6c8a589d3466e3277a8ac7519c84461722265fbd43c03649243fe9"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.840038 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" event={"ID":"0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d","Type":"ContainerStarted","Data":"5ad5ab46bf174dde2072ab45c4ba912d8162eb559d5036a36c9d7b6285d72ada"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.873029 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" event={"ID":"27e6ed8e-63f7-43a4-b162-3edc3c30c442","Type":"ContainerStarted","Data":"d4abe39a8707a6811133c2b5287a3097fbc020b300594dba0e01b63db429bdbc"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.873124 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" event={"ID":"27e6ed8e-63f7-43a4-b162-3edc3c30c442","Type":"ContainerStarted","Data":"45d4c83e7fa0d462bb675224d80659828dc8401b53de930b8d967ad7dca7f88d"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.880190 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8wjls"] Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.882326 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.884746 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:52.384685818 +0000 UTC m=+140.911150504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.892737 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-d8j77" event={"ID":"0c629b88-bf23-410a-903e-ae58dffc3bad","Type":"ContainerStarted","Data":"e1fb1f09a3f6b802569177a0f29638f36b3002637b2ab87efa490667e1ad8653"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.896838 4811 patch_prober.go:28] interesting pod/console-operator-58897d9998-d8j77 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.896917 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-d8j77" podUID="0c629b88-bf23-410a-903e-ae58dffc3bad" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.899036 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" event={"ID":"5e361b2e-edcc-418f-861e-83b65fc2634a","Type":"ContainerStarted","Data":"ae91dc4b861db23dc2c4ca66b11a0385bc17e0e29f5ca9ac52ec371e4b9ee0c8"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.926339 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" event={"ID":"25c5713b-847b-4f3f-9a76-2ded118334b3","Type":"ContainerStarted","Data":"c925ce8af0435cbdfce7871670cc9a8e9d2869100cdd8e7023fe3ab75fab58ac"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.929322 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" event={"ID":"25c5713b-847b-4f3f-9a76-2ded118334b3","Type":"ContainerStarted","Data":"13fb82fb2822ffdd56521dcda358d314dc9967dbbda9d524b2a7404b252f3115"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.958828 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bsbln"] Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.964730 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" event={"ID":"65264825-0c26-4bac-bac5-05966eddd111","Type":"ContainerStarted","Data":"ab1a150ae58ad231bbca8e18716e1585877f0ef765fdeb78bb61fda031a9d078"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.964819 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" event={"ID":"65264825-0c26-4bac-bac5-05966eddd111","Type":"ContainerStarted","Data":"8e26a18425db2c0cba4d22e9a30a9045e8af031c70599d45e0b57a1926f05265"} Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.967578 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.985761 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.986137 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:52.486100994 +0000 UTC m=+141.012565830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.986278 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.987867 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf" event={"ID":"5e93440f-9668-477a-8df8-a8da4d088cc8","Type":"ContainerStarted","Data":"993f055c71766e5b8338e752de10b78ef85eb7eed6400fdfdfd319368a6d8329"} Feb 19 10:10:51 crc kubenswrapper[4811]: E0219 10:10:51.988293 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 10:10:52.488263036 +0000 UTC m=+141.014727932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.993868 4811 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s595q container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.993927 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" podUID="65264825-0c26-4bac-bac5-05966eddd111" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 19 10:10:51 crc kubenswrapper[4811]: I0219 10:10:51.995652 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j" event={"ID":"d19195ba-d354-4a69-bc60-364ba77eead8","Type":"ContainerStarted","Data":"f497b54464b1176997ee6bbddcc77a27cc839d5b200f2d31bab68491fac1c927"} Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.010503 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" 
event={"ID":"889906af-3050-48fa-816a-7f0631f0ce47","Type":"ContainerStarted","Data":"d6202c76a3ec947604e7378b4375a876b6ae67d3168deb94cc50565e58b25292"} Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.010810 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.075062 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" event={"ID":"92e65cdf-ee57-4aed-91ca-644ebdc03a27","Type":"ContainerStarted","Data":"a2708e95fa1df4c0de8569b901c6a4bfcf7de25e0a185b29d6bf61703de14b8d"} Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.076689 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.076760 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.084909 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.090077 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Feb 19 10:10:52 crc kubenswrapper[4811]: E0219 10:10:52.091596 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:52.591575407 +0000 UTC m=+141.118040093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.201635 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:52 crc kubenswrapper[4811]: E0219 10:10:52.231814 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:52.731727432 +0000 UTC m=+141.258192118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.308939 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:52 crc kubenswrapper[4811]: E0219 10:10:52.309616 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:52.80958772 +0000 UTC m=+141.336052406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.386000 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb"] Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.400395 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb"] Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.411395 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:52 crc kubenswrapper[4811]: E0219 10:10:52.411904 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:52.911885886 +0000 UTC m=+141.438350572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.515400 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:52 crc kubenswrapper[4811]: E0219 10:10:52.515807 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:53.015764141 +0000 UTC m=+141.542228827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.530947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:52 crc kubenswrapper[4811]: E0219 10:10:52.531593 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:53.031555345 +0000 UTC m=+141.558020021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.632748 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:52 crc kubenswrapper[4811]: E0219 10:10:52.634224 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:53.134194061 +0000 UTC m=+141.660658757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.745203 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:52 crc kubenswrapper[4811]: E0219 10:10:52.746790 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:53.246768692 +0000 UTC m=+141.773233378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.849359 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:52 crc kubenswrapper[4811]: E0219 10:10:52.849939 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:53.349912159 +0000 UTC m=+141.876376845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.911562 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz"] Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.926357 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b5p7b"] Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.937652 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zhzk5" podStartSLOduration=117.93762332 podStartE2EDuration="1m57.93762332s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:52.935983391 +0000 UTC m=+141.462448087" watchObservedRunningTime="2026-02-19 10:10:52.93762332 +0000 UTC m=+141.464088006" Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.963063 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:52 crc kubenswrapper[4811]: E0219 10:10:52.963551 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:53.463530305 +0000 UTC m=+141.989994991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.968175 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r462p"] Feb 19 10:10:52 crc kubenswrapper[4811]: I0219 10:10:52.994725 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-dh45w" podStartSLOduration=117.994686944 podStartE2EDuration="1m57.994686944s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:52.978355447 +0000 UTC m=+141.504820133" watchObservedRunningTime="2026-02-19 10:10:52.994686944 +0000 UTC m=+141.521151630" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.046415 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" podStartSLOduration=117.046382561 podStartE2EDuration="1m57.046382561s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.045406357 +0000 UTC 
m=+141.571871043" watchObservedRunningTime="2026-02-19 10:10:53.046382561 +0000 UTC m=+141.572847247" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.064299 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:53 crc kubenswrapper[4811]: E0219 10:10:53.065109 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:53.565076964 +0000 UTC m=+142.091541650 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.105134 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw"] Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.112785 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" podStartSLOduration=118.112743295 podStartE2EDuration="1m58.112743295s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 10:10:53.092962446 +0000 UTC m=+141.619427122" watchObservedRunningTime="2026-02-19 10:10:53.112743295 +0000 UTC m=+141.639207981" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.165551 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" podStartSLOduration=118.165515567 podStartE2EDuration="1m58.165515567s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.162292111 +0000 UTC m=+141.688756797" watchObservedRunningTime="2026-02-19 10:10:53.165515567 +0000 UTC m=+141.691980253" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.166095 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:53 crc kubenswrapper[4811]: E0219 10:10:53.174139 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:53.674098471 +0000 UTC m=+142.200563147 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:53 crc kubenswrapper[4811]: W0219 10:10:53.174613 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ab75343_a016_48d3_930c_e391f08dc9e8.slice/crio-d5ea66e33990c524874ae0525932791f5f13df74245b47aa8cf652702bcabfcc WatchSource:0}: Error finding container d5ea66e33990c524874ae0525932791f5f13df74245b47aa8cf652702bcabfcc: Status 404 returned error can't find the container with id d5ea66e33990c524874ae0525932791f5f13df74245b47aa8cf652702bcabfcc Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.234065 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb" event={"ID":"fab44d7c-1d10-4b1f-b173-b06fff08c286","Type":"ContainerStarted","Data":"6c045b3965f8fe4b5e787038ce7f59e3e631ba84d62ec68350ff63dec49b9b6f"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.239045 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-87n5j" podStartSLOduration=118.239020501 podStartE2EDuration="1m58.239020501s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.234438103 +0000 UTC m=+141.760902789" watchObservedRunningTime="2026-02-19 10:10:53.239020501 +0000 UTC m=+141.765485187" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 
10:10:53.274853 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:53 crc kubenswrapper[4811]: E0219 10:10:53.275395 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:53.775370934 +0000 UTC m=+142.301835610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.309283 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-t44xg" podStartSLOduration=118.309255348 podStartE2EDuration="1m58.309255348s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.296474675 +0000 UTC m=+141.822939361" watchObservedRunningTime="2026-02-19 10:10:53.309255348 +0000 UTC m=+141.835720034" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.312711 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zrd6d"] Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 
10:10:53.336308 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs" event={"ID":"37172e17-0240-46d9-878a-f396d66ea307","Type":"ContainerStarted","Data":"1c53e4844aee6a04c634c69757bac633ac6a4de4b32801d1e11df1da9f1c7436"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.347442 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" podStartSLOduration=117.347408153 podStartE2EDuration="1m57.347408153s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.342869515 +0000 UTC m=+141.869334201" watchObservedRunningTime="2026-02-19 10:10:53.347408153 +0000 UTC m=+141.873872839" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.370539 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vxgv9" event={"ID":"0b2a3a7d-590d-410c-a4b7-6e7a7729fc3d","Type":"ContainerStarted","Data":"9813e61470d73989824dc88f72ecc84c8930c63cb16e62fde760717222d790e5"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.380163 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:53 crc kubenswrapper[4811]: E0219 10:10:53.380611 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 10:10:53.88059218 +0000 UTC m=+142.407056866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.387139 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" event={"ID":"2f61e647-ba04-4c85-9e4e-b58c3823b3a2","Type":"ContainerStarted","Data":"fc0de6666549961ff86be650428caa06eefe9d09e2a99faf578061e4fa6df9ec"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.391722 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bf2qf" podStartSLOduration=117.391700254 podStartE2EDuration="1m57.391700254s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.389617714 +0000 UTC m=+141.916082400" watchObservedRunningTime="2026-02-19 10:10:53.391700254 +0000 UTC m=+141.918164940" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.406087 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" event={"ID":"5e361b2e-edcc-418f-861e-83b65fc2634a","Type":"ContainerStarted","Data":"e7034d1aee2db1b6f26cd65e76e804cf17ec02176bf24437e2c6bf0a4f12f5e2"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.415067 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j" event={"ID":"d19195ba-d354-4a69-bc60-364ba77eead8","Type":"ContainerStarted","Data":"8bcba37dd71e5cb1704da0000492b1f40dd3b8e86962418a8a3231ce46ab40c2"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.455422 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" podStartSLOduration=118.455343154 podStartE2EDuration="1m58.455343154s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.432064042 +0000 UTC m=+141.958528738" watchObservedRunningTime="2026-02-19 10:10:53.455343154 +0000 UTC m=+141.981807850" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.462768 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xgkqk" event={"ID":"92e65cdf-ee57-4aed-91ca-644ebdc03a27","Type":"ContainerStarted","Data":"eb0757f07d2227cd1b871fe84af8af314a70fd53c460a60a48800abe4a543443"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.469919 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb" event={"ID":"4eecb463-84f2-45b0-8281-be71a8ec27e9","Type":"ContainerStarted","Data":"c881b71adf47b262bb74bab4ffeafb33df5f1c61fa844c501d7f24434e665bc4"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.471308 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz" event={"ID":"1fcdf22f-c384-45cd-a738-a6ec8a9d181e","Type":"ContainerStarted","Data":"3b6c505adc02994be90b3a6d040823c737667ea95e01b144a3509f59532e1c83"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.477411 4811 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" podStartSLOduration=118.477378647 podStartE2EDuration="1m58.477378647s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.461269825 +0000 UTC m=+141.987734511" watchObservedRunningTime="2026-02-19 10:10:53.477378647 +0000 UTC m=+142.003843333" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.480975 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bsbln" event={"ID":"96e56630-78f7-4f76-9d24-25ca68e8fa0a","Type":"ContainerStarted","Data":"94dff409b6d07b4d8773754507deeddd60698c6b6603d045b9ef694f033bfe43"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.482089 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:53 crc kubenswrapper[4811]: E0219 10:10:53.484067 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:53.984040175 +0000 UTC m=+142.510504861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.500156 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tnfdb" event={"ID":"616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf","Type":"ContainerStarted","Data":"b10c8f87bc0ca5581b75fe649e7c026eb390da968dc05c0353dc12d8f3382bd2"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.521988 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf" event={"ID":"5e93440f-9668-477a-8df8-a8da4d088cc8","Type":"ContainerStarted","Data":"106908eca37a43e2f924c0fcb912aacf0bc1c7a1e4cde848128294581a6c1a6f"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.531461 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-cmg4v" event={"ID":"30ea8030-3330-42dc-bc17-00a8058c7b04","Type":"ContainerStarted","Data":"3d465b22a78557ec656f20949d891276470b4a1e2d55097ea2fb181a84acd2ac"} Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.549783 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" podStartSLOduration=117.549755074 podStartE2EDuration="1m57.549755074s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.503969078 +0000 UTC m=+142.030433784" 
watchObservedRunningTime="2026-02-19 10:10:53.549755074 +0000 UTC m=+142.076219760" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.551875 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-d8j77" podStartSLOduration=118.551867824 podStartE2EDuration="1m58.551867824s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.55125089 +0000 UTC m=+142.077715576" watchObservedRunningTime="2026-02-19 10:10:53.551867824 +0000 UTC m=+142.078332510" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.570206 4811 csr.go:261] certificate signing request csr-xmnlq is approved, waiting to be issued Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.577344 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6vsq7" podStartSLOduration=118.577304708 podStartE2EDuration="1m58.577304708s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.576042558 +0000 UTC m=+142.102507244" watchObservedRunningTime="2026-02-19 10:10:53.577304708 +0000 UTC m=+142.103769394" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.585998 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:53 crc kubenswrapper[4811]: E0219 10:10:53.588403 4811 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:54.088385421 +0000 UTC m=+142.614850107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.593859 4811 csr.go:257] certificate signing request csr-xmnlq is issued Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.624546 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2jmvs" podStartSLOduration=119.624523248 podStartE2EDuration="1m59.624523248s" podCreationTimestamp="2026-02-19 10:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.624408975 +0000 UTC m=+142.150873681" watchObservedRunningTime="2026-02-19 10:10:53.624523248 +0000 UTC m=+142.150987924" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.624982 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s595q" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.675153 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbw9j" podStartSLOduration=117.675125329 podStartE2EDuration="1m57.675125329s" 
podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.671859941 +0000 UTC m=+142.198324627" watchObservedRunningTime="2026-02-19 10:10:53.675125329 +0000 UTC m=+142.201590015" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.687706 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:53 crc kubenswrapper[4811]: E0219 10:10:53.687942 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:54.187910562 +0000 UTC m=+142.714375248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.688320 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:53 crc kubenswrapper[4811]: E0219 10:10:53.688762 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:54.188753842 +0000 UTC m=+142.715218528 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.720023 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-cmg4v" podStartSLOduration=118.719997863 podStartE2EDuration="1m58.719997863s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.718875587 +0000 UTC m=+142.245340273" watchObservedRunningTime="2026-02-19 10:10:53.719997863 +0000 UTC m=+142.246462539" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.762369 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-blkmf" podStartSLOduration=117.762329958 podStartE2EDuration="1m57.762329958s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.760193877 +0000 UTC m=+142.286658563" watchObservedRunningTime="2026-02-19 10:10:53.762329958 +0000 UTC m=+142.288794644" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.790002 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:53 crc kubenswrapper[4811]: E0219 10:10:53.790662 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:54.29063759 +0000 UTC m=+142.817102276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.804334 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tnfdb" podStartSLOduration=6.804301694 podStartE2EDuration="6.804301694s" podCreationTimestamp="2026-02-19 10:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:53.796824866 +0000 UTC m=+142.323289552" watchObservedRunningTime="2026-02-19 10:10:53.804301694 +0000 UTC m=+142.330766380" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.831731 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zrrrs" podStartSLOduration=119.831706674 podStartE2EDuration="1m59.831706674s" podCreationTimestamp="2026-02-19 10:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 10:10:53.83070176 +0000 UTC m=+142.357166466" watchObservedRunningTime="2026-02-19 10:10:53.831706674 +0000 UTC m=+142.358171360" Feb 19 10:10:53 crc kubenswrapper[4811]: I0219 10:10:53.897380 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:53 crc kubenswrapper[4811]: E0219 10:10:53.898041 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:54.398022717 +0000 UTC m=+142.924487393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:53 crc kubenswrapper[4811]: W0219 10:10:53.971332 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod420c2489_d813_4281_8a38_ecc47f25b991.slice/crio-3ce52abd594512e87e13fffc759b0ba743890849117c2b7010ce11f6179ccbb3 WatchSource:0}: Error finding container 3ce52abd594512e87e13fffc759b0ba743890849117c2b7010ce11f6179ccbb3: Status 404 returned error can't find the container with id 3ce52abd594512e87e13fffc759b0ba743890849117c2b7010ce11f6179ccbb3 Feb 19 10:10:54 crc 
kubenswrapper[4811]: I0219 10:10:54.000545 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:54 crc kubenswrapper[4811]: E0219 10:10:54.001083 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:54.501059942 +0000 UTC m=+143.027524638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.086298 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.104600 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:54 crc kubenswrapper[4811]: E0219 10:10:54.105138 4811 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:54.605119161 +0000 UTC m=+143.131583847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.209121 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:54 crc kubenswrapper[4811]: E0219 10:10:54.209792 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:54.709760964 +0000 UTC m=+143.236225650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.213572 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.213659 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.317630 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:54 crc kubenswrapper[4811]: E0219 10:10:54.318306 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:54.818284699 +0000 UTC m=+143.344749385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.345611 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8j7vw" Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.422884 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:54 crc kubenswrapper[4811]: E0219 10:10:54.423914 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:54.923892825 +0000 UTC m=+143.450357511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.425577 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.465736 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.465824 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.482966 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-d8j77" Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.532191 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" 
Feb 19 10:10:54 crc kubenswrapper[4811]: E0219 10:10:54.535186 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:55.035162715 +0000 UTC m=+143.561627401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.607936 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 10:05:53 +0000 UTC, rotation deadline is 2026-12-22 21:18:01.071030716 +0000 UTC Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.607974 4811 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7355h7m6.463059644s for next certificate rotation Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.634527 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:54 crc kubenswrapper[4811]: E0219 10:10:54.635214 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 10:10:55.135189919 +0000 UTC m=+143.661654605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.653019 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb" event={"ID":"4eecb463-84f2-45b0-8281-be71a8ec27e9","Type":"ContainerStarted","Data":"eb196b8c5bbb006f39f6949ea807ba27853d978b346e4e5bfc7f4891bf41be05"} Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.666175 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" event={"ID":"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086","Type":"ContainerStarted","Data":"1a110de6bd28b68c26d0bfe5656b8dab0e94d990f0932e85fe1f89916597ea26"} Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.688735 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb" event={"ID":"fab44d7c-1d10-4b1f-b173-b06fff08c286","Type":"ContainerStarted","Data":"b39529f1ead70de155acb5465a79be70ac3662bf5c33bd591e8985581085f7a4"} Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.716771 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" event={"ID":"420c2489-d813-4281-8a38-ecc47f25b991","Type":"ContainerStarted","Data":"3ce52abd594512e87e13fffc759b0ba743890849117c2b7010ce11f6179ccbb3"} Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.743219 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:54 crc kubenswrapper[4811]: E0219 10:10:54.744373 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:55.244361618 +0000 UTC m=+143.770826304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.746848 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" event={"ID":"3ab75343-a016-48d3-930c-e391f08dc9e8","Type":"ContainerStarted","Data":"d5ea66e33990c524874ae0525932791f5f13df74245b47aa8cf652702bcabfcc"} Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.769740 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" event={"ID":"4dfe3823-e78b-4f62-b928-96b36d172dac","Type":"ContainerStarted","Data":"ade37f7f065d51a6d9622b2f7ba1f2372aeac2ae60cce988bf0e10c661b0ae40"} Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.824257 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-cmg4v" event={"ID":"30ea8030-3330-42dc-bc17-00a8058c7b04","Type":"ContainerStarted","Data":"f33b4e58b3f8a4bd38c04b0cf7b4ef6ff907bf808de7d8b3c6c6eb21ae048bc4"} Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.861664 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:54 crc kubenswrapper[4811]: E0219 10:10:54.861879 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:55.361841356 +0000 UTC m=+143.888306042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.862225 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:54 crc kubenswrapper[4811]: E0219 10:10:54.862805 4811 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:55.362785488 +0000 UTC m=+143.889250184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.882003 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tnfdb" event={"ID":"616355ce-0d45-4eb0-bcf2-0e58f2fb7fcf","Type":"ContainerStarted","Data":"612a8863f022b03ac477f82eaa2289e561b055332a07c1c7d8e7e40e5b48fa8f"} Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.898042 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-dz4cb" podStartSLOduration=118.897998054 podStartE2EDuration="1m58.897998054s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:54.89616883 +0000 UTC m=+143.422633516" watchObservedRunningTime="2026-02-19 10:10:54.897998054 +0000 UTC m=+143.424462740" Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.948304 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh"] Feb 19 10:10:54 crc kubenswrapper[4811]: I0219 10:10:54.964137 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:54 crc kubenswrapper[4811]: E0219 10:10:54.965082 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:55.465015624 +0000 UTC m=+143.991480310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.036183 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.068803 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c46m7"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.078168 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.090023 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:55 crc kubenswrapper[4811]: E0219 10:10:55.095056 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:55.595021979 +0000 UTC m=+144.121486665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.127055 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pctn2"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.139696 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cpcsw"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.140926 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.146049 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.153992 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.162109 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.191225 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.191420 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wf6f\" (UniqueName: \"kubernetes.io/projected/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-kube-api-access-7wf6f\") pod \"community-operators-cpcsw\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.191480 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-utilities\") pod \"certified-operators-pctn2\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.191519 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-catalog-content\") pod 
\"community-operators-cpcsw\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.191537 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9slxr\" (UniqueName: \"kubernetes.io/projected/006d82c1-543d-45da-8e4b-75a557df7959-kube-api-access-9slxr\") pod \"certified-operators-pctn2\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.191581 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-catalog-content\") pod \"certified-operators-pctn2\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.191607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-utilities\") pod \"community-operators-cpcsw\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:10:55 crc kubenswrapper[4811]: E0219 10:10:55.191819 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:55.691796015 +0000 UTC m=+144.218260701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.208795 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pctn2"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.221528 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9668s"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.221613 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cpcsw"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.275849 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-ntk89"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.292753 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-utilities\") pod \"certified-operators-pctn2\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.292816 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:55 crc 
kubenswrapper[4811]: I0219 10:10:55.292841 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-catalog-content\") pod \"community-operators-cpcsw\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.292869 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9slxr\" (UniqueName: \"kubernetes.io/projected/006d82c1-543d-45da-8e4b-75a557df7959-kube-api-access-9slxr\") pod \"certified-operators-pctn2\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.292912 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-catalog-content\") pod \"certified-operators-pctn2\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.292937 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-utilities\") pod \"community-operators-cpcsw\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.293006 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wf6f\" (UniqueName: \"kubernetes.io/projected/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-kube-api-access-7wf6f\") pod \"community-operators-cpcsw\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:10:55 crc 
kubenswrapper[4811]: I0219 10:10:55.293986 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-utilities\") pod \"certified-operators-pctn2\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:10:55 crc kubenswrapper[4811]: E0219 10:10:55.294319 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:55.794304017 +0000 UTC m=+144.320768703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.295935 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-catalog-content\") pod \"certified-operators-pctn2\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.330272 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ksjbn"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.332795 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.359706 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ksjbn"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.386826 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-utilities\") pod \"community-operators-cpcsw\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.388948 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9slxr\" (UniqueName: \"kubernetes.io/projected/006d82c1-543d-45da-8e4b-75a557df7959-kube-api-access-9slxr\") pod \"certified-operators-pctn2\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.389269 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-catalog-content\") pod \"community-operators-cpcsw\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.399046 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.399542 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp"] Feb 19 10:10:55 crc kubenswrapper[4811]: E0219 10:10:55.399658 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:55.899618376 +0000 UTC m=+144.426083062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.407709 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wf6f\" (UniqueName: \"kubernetes.io/projected/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-kube-api-access-7wf6f\") pod \"community-operators-cpcsw\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.431544 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8"] Feb 19 10:10:55 crc kubenswrapper[4811]: W0219 10:10:55.431978 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda71943f3_8cb9_48ff_aaf0_38a30132ffd4.slice/crio-59907c8f92773bae5c56f8f12c51cbff6e787486140d7ef638b03ae77b504daf WatchSource:0}: Error finding container 59907c8f92773bae5c56f8f12c51cbff6e787486140d7ef638b03ae77b504daf: Status 404 returned error can't find the container with id 
59907c8f92773bae5c56f8f12c51cbff6e787486140d7ef638b03ae77b504daf Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.438170 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kfhx5"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.463958 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k4gqt"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.465361 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.485094 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:10:55 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:10:55 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:10:55 crc kubenswrapper[4811]: healthz check failed Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.485177 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.498648 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k4gqt"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.503768 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-utilities\") pod \"certified-operators-ksjbn\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 
10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.503916 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-catalog-content\") pod \"certified-operators-ksjbn\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.503956 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.503983 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm4rp\" (UniqueName: \"kubernetes.io/projected/ca27a1be-f8a5-447f-b61f-da206a041290-kube-api-access-qm4rp\") pod \"certified-operators-ksjbn\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:10:55 crc kubenswrapper[4811]: E0219 10:10:55.505083 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:56.005067958 +0000 UTC m=+144.531532644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.529575 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l7559"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.549369 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mxhh7"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.567283 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8jqkt"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.578074 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq"] Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.595576 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.623955 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.624443 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-catalog-content\") pod \"community-operators-k4gqt\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.624500 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h87cn\" (UniqueName: \"kubernetes.io/projected/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-kube-api-access-h87cn\") pod \"community-operators-k4gqt\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.624527 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-utilities\") pod \"certified-operators-ksjbn\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.624579 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-utilities\") pod 
\"community-operators-k4gqt\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.624600 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-catalog-content\") pod \"certified-operators-ksjbn\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.624645 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm4rp\" (UniqueName: \"kubernetes.io/projected/ca27a1be-f8a5-447f-b61f-da206a041290-kube-api-access-qm4rp\") pod \"certified-operators-ksjbn\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:10:55 crc kubenswrapper[4811]: E0219 10:10:55.638068 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:56.138007682 +0000 UTC m=+144.664472378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.639228 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-utilities\") pod \"certified-operators-ksjbn\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.640393 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-catalog-content\") pod \"certified-operators-ksjbn\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:10:55 crc kubenswrapper[4811]: W0219 10:10:55.644663 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28d91e3a_4d4d_42c7_8f03_d44e6e539e66.slice/crio-360f982303070976b46bf6e21c5404029adb74ae92e7c7cf644609fac53fb29e WatchSource:0}: Error finding container 360f982303070976b46bf6e21c5404029adb74ae92e7c7cf644609fac53fb29e: Status 404 returned error can't find the container with id 360f982303070976b46bf6e21c5404029adb74ae92e7c7cf644609fac53fb29e Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.672010 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm4rp\" (UniqueName: 
\"kubernetes.io/projected/ca27a1be-f8a5-447f-b61f-da206a041290-kube-api-access-qm4rp\") pod \"certified-operators-ksjbn\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:10:55 crc kubenswrapper[4811]: W0219 10:10:55.678610 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89c5cf4e_a301_48d2_8d31_056b6afc187b.slice/crio-1420797e62ca4bdef3798fa3ce5a39a66c724b426135e0aa1b3d100cd4fccf26 WatchSource:0}: Error finding container 1420797e62ca4bdef3798fa3ce5a39a66c724b426135e0aa1b3d100cd4fccf26: Status 404 returned error can't find the container with id 1420797e62ca4bdef3798fa3ce5a39a66c724b426135e0aa1b3d100cd4fccf26 Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.692323 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.707162 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.727817 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-catalog-content\") pod \"community-operators-k4gqt\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.727873 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h87cn\" (UniqueName: \"kubernetes.io/projected/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-kube-api-access-h87cn\") pod \"community-operators-k4gqt\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.727917 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-utilities\") pod \"community-operators-k4gqt\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.727953 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:55 crc kubenswrapper[4811]: E0219 10:10:55.728297 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 10:10:56.228282494 +0000 UTC m=+144.754747190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.728779 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-catalog-content\") pod \"community-operators-k4gqt\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.729306 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-utilities\") pod \"community-operators-k4gqt\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.751226 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h87cn\" (UniqueName: \"kubernetes.io/projected/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-kube-api-access-h87cn\") pod \"community-operators-k4gqt\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.831066 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.831905 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:10:55 crc kubenswrapper[4811]: E0219 10:10:55.832527 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:56.332415255 +0000 UTC m=+144.858880141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.907621 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" event={"ID":"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899","Type":"ContainerStarted","Data":"3e75176f9464df95c7e89e8b95065f2afbf16a835a9ba29faa6144d7b7efbef9"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.912204 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" event={"ID":"3ab75343-a016-48d3-930c-e391f08dc9e8","Type":"ContainerStarted","Data":"33d462095e89d1bd33324b6ed6a8e22814d0ad9cd2f372c61903ee7967a2e822"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.912280 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" event={"ID":"3ab75343-a016-48d3-930c-e391f08dc9e8","Type":"ContainerStarted","Data":"161c7579ec1c3ea61d9732111f310e0f27db06ced69238bf384a457dcbc5f6dd"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.915296 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" event={"ID":"420c2489-d813-4281-8a38-ecc47f25b991","Type":"ContainerStarted","Data":"96a4ccd11a5079cd349065727fd2fe816b2312d54532e0bccb833196a639ed35"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.915808 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.918484 4811 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4w6gw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.918686 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" podUID="420c2489-d813-4281-8a38-ecc47f25b991" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.919683 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt" event={"ID":"28d91e3a-4d4d-42c7-8f03-d44e6e539e66","Type":"ContainerStarted","Data":"360f982303070976b46bf6e21c5404029adb74ae92e7c7cf644609fac53fb29e"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.921053 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kfhx5" event={"ID":"0e00367b-32b1-47e6-b543-82deeff3d190","Type":"ContainerStarted","Data":"0b9a5da34be00d42cb2a7e9984a4cbb852a71177a18d9b6112c04768f2b49d5e"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.924508 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" event={"ID":"2f61e647-ba04-4c85-9e4e-b58c3823b3a2","Type":"ContainerStarted","Data":"b985de480614b363736d5a52817dc44700a02fc5707514fa154cc35743d3d0a3"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.924572 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.927600 4811 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8wjls container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.37:6443/healthz\": dial tcp 10.217.0.37:6443: connect: connection refused" start-of-body= Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.927668 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" podUID="2f61e647-ba04-4c85-9e4e-b58c3823b3a2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.37:6443/healthz\": dial tcp 10.217.0.37:6443: connect: connection refused" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.938226 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" event={"ID":"4dfe3823-e78b-4f62-b928-96b36d172dac","Type":"ContainerStarted","Data":"c9894491b59aafcc3fb8618ab2b3f09d62032e707795d7f7991d1ed610032b95"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.939807 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:55 crc kubenswrapper[4811]: E0219 10:10:55.940393 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:56.440377695 +0000 UTC m=+144.966842381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.971829 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz" event={"ID":"1fcdf22f-c384-45cd-a738-a6ec8a9d181e","Type":"ContainerStarted","Data":"19b8975fd01a72d16fa82e9db90dded13105d78e05acc4b935936a58f60c58be"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.974988 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bsbln" event={"ID":"96e56630-78f7-4f76-9d24-25ca68e8fa0a","Type":"ContainerStarted","Data":"dbc18cff2a9076cf9043621e86b7b8a6b8c95147273c3b72b88c50ead92e407c"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.976693 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" event={"ID":"f41cc81f-afdb-461f-a860-c8abbcb84633","Type":"ContainerStarted","Data":"804962fd4128d1c67c9d1bd98ef41bee0d30fbbb2e7da0b75cbd19ce2ba3de17"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.977883 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" event={"ID":"fdd47192-fa50-42cb-8679-fba3f93df36f","Type":"ContainerStarted","Data":"a28e532567fcb7dc19f5c8b42c9c5c4a6b563bf5bef9f8a34a6fa4a2c59d9a9f"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.979038 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" event={"ID":"62dd3b62-846d-4fc2-aefe-2da8f53ac288","Type":"ContainerStarted","Data":"ef7858f5d531e061ce5026ba884c5784250ad9f421b03af714144de6f57a8c5f"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.980856 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb" event={"ID":"fab44d7c-1d10-4b1f-b173-b06fff08c286","Type":"ContainerStarted","Data":"d9da57b609840eacbd274745322d3927f0af25865062a3b296a4ddbe70cc7ffb"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.982760 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" event={"ID":"f803b9c8-002a-46a8-9eca-6b074ef7bda3","Type":"ContainerStarted","Data":"972eb5f017e20e30a704314a66c2c996ff4b922cd611c91c2d9a46d056e0a76e"} Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.995001 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" podStartSLOduration=119.994980761 podStartE2EDuration="1m59.994980761s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:55.993632559 +0000 UTC m=+144.520097245" watchObservedRunningTime="2026-02-19 10:10:55.994980761 +0000 UTC m=+144.521445447" Feb 19 10:10:55 crc kubenswrapper[4811]: I0219 10:10:55.995336 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9668s" event={"ID":"bd927d4d-c8dd-4c5b-a829-ab9b73ed7902","Type":"ContainerStarted","Data":"4d8b695a5c1e2c1e247554f7e5df8c6843d8c7b77f3ad74f905e102733fa0ae2"} Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.011623 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" event={"ID":"89c5cf4e-a301-48d2-8d31-056b6afc187b","Type":"ContainerStarted","Data":"1420797e62ca4bdef3798fa3ce5a39a66c724b426135e0aa1b3d100cd4fccf26"} Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.025522 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" event={"ID":"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff","Type":"ContainerStarted","Data":"ace9aeb6b9f39a371561443f8caaf193570100029c84a44df81cc8e22725224e"} Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.035625 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-w5ffb" podStartSLOduration=120.035597485 podStartE2EDuration="2m0.035597485s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:56.03286297 +0000 UTC m=+144.559327656" watchObservedRunningTime="2026-02-19 10:10:56.035597485 +0000 UTC m=+144.562062171" Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.040937 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.042750 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:56.542723074 +0000 UTC m=+145.069187760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.051023 4811 generic.go:334] "Generic (PLEG): container finished" podID="f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086" containerID="b351ac5410d24eb5d60028aafd4fded54ffa186c1b4e60cbcc0e8be66fea16ae" exitCode=0 Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.051172 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" event={"ID":"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086","Type":"ContainerDied","Data":"b351ac5410d24eb5d60028aafd4fded54ffa186c1b4e60cbcc0e8be66fea16ae"} Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.086983 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" 
event={"ID":"780ea2c0-bcb2-40bd-b6b8-c62cd877d72e","Type":"ContainerStarted","Data":"2dd1b6fb579b34ca02d8fc39ae13fa81d6014fafadbbedca81f680d758a60643"} Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.105024 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ntk89" event={"ID":"a71943f3-8cb9-48ff-aaf0-38a30132ffd4","Type":"ContainerStarted","Data":"59907c8f92773bae5c56f8f12c51cbff6e787486140d7ef638b03ae77b504daf"} Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.123738 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nx5d7" Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.149029 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" podStartSLOduration=121.149008275 podStartE2EDuration="2m1.149008275s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:56.106476856 +0000 UTC m=+144.632941552" watchObservedRunningTime="2026-02-19 10:10:56.149008275 +0000 UTC m=+144.675472951" Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.149595 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-84dtz" podStartSLOduration=120.149589919 podStartE2EDuration="2m0.149589919s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:56.145670436 +0000 UTC m=+144.672135122" watchObservedRunningTime="2026-02-19 10:10:56.149589919 +0000 UTC m=+144.676054605" Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.161644 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.166326 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:56.666302676 +0000 UTC m=+145.192767362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.228922 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zrd6d" podStartSLOduration=121.228894511 podStartE2EDuration="2m1.228894511s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:56.215664187 +0000 UTC m=+144.742128873" watchObservedRunningTime="2026-02-19 10:10:56.228894511 +0000 UTC m=+144.755359197" Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.262723 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.263106 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:56.763080472 +0000 UTC m=+145.289545158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.364492 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.365279 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:56.865261926 +0000 UTC m=+145.391726612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.427641 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:10:56 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:10:56 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:10:56 crc kubenswrapper[4811]: healthz check failed Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.427707 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.465547 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.465947 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 10:10:56.965920565 +0000 UTC m=+145.492385251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.568009 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.568408 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:57.068392316 +0000 UTC m=+145.594857002 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.682089 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.683396 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:57.183364014 +0000 UTC m=+145.709828700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.786696 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.787523 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:57.287509735 +0000 UTC m=+145.813974421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.788598 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.788620 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.888346 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.888789 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:57.388758147 +0000 UTC m=+145.915222833 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.888957 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.889399 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:57.389387652 +0000 UTC m=+145.915852338 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.994955 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.995206 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:57.495164302 +0000 UTC m=+146.021628988 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:56 crc kubenswrapper[4811]: I0219 10:10:56.995928 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:56 crc kubenswrapper[4811]: E0219 10:10:56.996387 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:57.49636985 +0000 UTC m=+146.022834536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.044219 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-79ql7"] Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.045971 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.057730 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.098981 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:57 crc kubenswrapper[4811]: E0219 10:10:57.099581 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:57.599558549 +0000 UTC m=+146.126023235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.154841 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pctn2"] Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.154913 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-79ql7"] Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.203293 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-catalog-content\") pod \"redhat-marketplace-79ql7\" (UID: \"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.203351 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.203423 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79jzz\" (UniqueName: \"kubernetes.io/projected/bad2100b-8b05-432c-96b8-19dbc3c5471c-kube-api-access-79jzz\") pod \"redhat-marketplace-79ql7\" (UID: 
\"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.203496 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-utilities\") pod \"redhat-marketplace-79ql7\" (UID: \"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:10:57 crc kubenswrapper[4811]: E0219 10:10:57.203970 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:57.703951056 +0000 UTC m=+146.230415742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:57 crc kubenswrapper[4811]: W0219 10:10:57.283743 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod006d82c1_543d_45da_8e4b_75a557df7959.slice/crio-6cac65382904eaaa4b2f739b36cd51fc1881c69291b0ef5a9d2102b775ea87ac WatchSource:0}: Error finding container 6cac65382904eaaa4b2f739b36cd51fc1881c69291b0ef5a9d2102b775ea87ac: Status 404 returned error can't find the container with id 6cac65382904eaaa4b2f739b36cd51fc1881c69291b0ef5a9d2102b775ea87ac Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.289045 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" event={"ID":"f41cc81f-afdb-461f-a860-c8abbcb84633","Type":"ContainerStarted","Data":"1a25e1799eee08660dd375889da6d5cecba4831157420d3f2e875a84524c2586"} Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.307818 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.308186 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-catalog-content\") pod \"redhat-marketplace-79ql7\" (UID: \"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.308271 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79jzz\" (UniqueName: \"kubernetes.io/projected/bad2100b-8b05-432c-96b8-19dbc3c5471c-kube-api-access-79jzz\") pod \"redhat-marketplace-79ql7\" (UID: \"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.308306 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-utilities\") pod \"redhat-marketplace-79ql7\" (UID: \"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.308929 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-utilities\") pod \"redhat-marketplace-79ql7\" (UID: \"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.317728 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" event={"ID":"780ea2c0-bcb2-40bd-b6b8-c62cd877d72e","Type":"ContainerStarted","Data":"7ea74fec66d153b0e24d42a694954e61171ecaa384e86deaaa91204bd18f21ff"} Feb 19 10:10:57 crc kubenswrapper[4811]: E0219 10:10:57.318293 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:57.818244108 +0000 UTC m=+146.344708924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.318424 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-catalog-content\") pod \"redhat-marketplace-79ql7\" (UID: \"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.324750 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rsctk" podStartSLOduration=121.324723451 podStartE2EDuration="2m1.324723451s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:57.318606266 +0000 UTC m=+145.845070952" watchObservedRunningTime="2026-02-19 10:10:57.324723451 +0000 UTC m=+145.851188137" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.375578 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9668s" event={"ID":"bd927d4d-c8dd-4c5b-a829-ab9b73ed7902","Type":"ContainerStarted","Data":"f3301f7f24686d861205d1b315448febf0623cd3d4dd7b764cf94ce303d5efde"} Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.396934 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79jzz\" (UniqueName: \"kubernetes.io/projected/bad2100b-8b05-432c-96b8-19dbc3c5471c-kube-api-access-79jzz\") pod \"redhat-marketplace-79ql7\" (UID: \"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.410725 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.412160 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:57 crc kubenswrapper[4811]: E0219 10:10:57.413996 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:57.913980269 +0000 UTC m=+146.440444955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.462309 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9668s" podStartSLOduration=9.462095821 podStartE2EDuration="9.462095821s" podCreationTimestamp="2026-02-19 10:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:57.431028534 +0000 UTC m=+145.957493210" watchObservedRunningTime="2026-02-19 10:10:57.462095821 +0000 UTC m=+145.988560507" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.463056 4811 
patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:10:57 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:10:57 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:10:57 crc kubenswrapper[4811]: healthz check failed Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.464655 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.466251 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q7qnh"] Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.477848 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.518227 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:57 crc kubenswrapper[4811]: E0219 10:10:57.519067 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:58.019014761 +0000 UTC m=+146.545479447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.523012 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" event={"ID":"f803b9c8-002a-46a8-9eca-6b074ef7bda3","Type":"ContainerStarted","Data":"de7aa7e96c9020c3f048d1c02ed1ac816b2449cc365365d519e338cbcda43797"} Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.525456 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.535216 4811 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f2zf8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.535317 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" podUID="f803b9c8-002a-46a8-9eca-6b074ef7bda3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.542592 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" 
event={"ID":"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff","Type":"ContainerStarted","Data":"10f5878ca85402d4bde9c06bfcea1acccad9688ef57dd119acc6eadedb58db07"} Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.574809 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7qnh"] Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.575177 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" podStartSLOduration=121.575161173 podStartE2EDuration="2m1.575161173s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:57.573256638 +0000 UTC m=+146.099721324" watchObservedRunningTime="2026-02-19 10:10:57.575161173 +0000 UTC m=+146.101625859" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.628508 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" event={"ID":"fdd47192-fa50-42cb-8679-fba3f93df36f","Type":"ContainerStarted","Data":"8c4d2cbf634ff445b7092bc155a36ca36a93ed680011f897186acf3e5795f14a"} Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.648277 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt" podStartSLOduration=121.648240577 podStartE2EDuration="2m1.648240577s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:57.628823117 +0000 UTC m=+146.155287813" watchObservedRunningTime="2026-02-19 10:10:57.648240577 +0000 UTC m=+146.174705263" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.649737 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-catalog-content\") pod \"redhat-marketplace-q7qnh\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.649808 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.649959 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vmvs\" (UniqueName: \"kubernetes.io/projected/0f0825a4-4e84-49d7-929e-165ced3ed2f8-kube-api-access-6vmvs\") pod \"redhat-marketplace-q7qnh\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.650094 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-utilities\") pod \"redhat-marketplace-q7qnh\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:10:57 crc kubenswrapper[4811]: E0219 10:10:57.650539 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:58.150517791 +0000 UTC m=+146.676982477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.650087 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ksjbn"] Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.689412 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bsbln" event={"ID":"96e56630-78f7-4f76-9d24-25ca68e8fa0a","Type":"ContainerStarted","Data":"10fcccaa34daf73e64eae4d91eabfa2c8d0e2acd67874ad9ef58d47a2a0b8a93"} Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.691579 4811 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8wjls container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.37:6443/healthz\": dial tcp 10.217.0.37:6443: connect: connection refused" start-of-body= Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.691620 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" podUID="2f61e647-ba04-4c85-9e4e-b58c3823b3a2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.37:6443/healthz\": dial tcp 10.217.0.37:6443: connect: connection refused" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.756169 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.756588 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-catalog-content\") pod \"redhat-marketplace-q7qnh\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.756694 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vmvs\" (UniqueName: \"kubernetes.io/projected/0f0825a4-4e84-49d7-929e-165ced3ed2f8-kube-api-access-6vmvs\") pod \"redhat-marketplace-q7qnh\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.756773 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-utilities\") pod \"redhat-marketplace-q7qnh\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:10:57 crc kubenswrapper[4811]: E0219 10:10:57.757773 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:58.257748706 +0000 UTC m=+146.784213382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.758827 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-catalog-content\") pod \"redhat-marketplace-q7qnh\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.800181 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-utilities\") pod \"redhat-marketplace-q7qnh\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.812722 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vmvs\" (UniqueName: \"kubernetes.io/projected/0f0825a4-4e84-49d7-929e-165ced3ed2f8-kube-api-access-6vmvs\") pod \"redhat-marketplace-q7qnh\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.860343 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.877150 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:10:57 crc kubenswrapper[4811]: E0219 10:10:57.878317 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:58.378281916 +0000 UTC m=+146.904746602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.894951 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r462p" podStartSLOduration=121.89491295 podStartE2EDuration="2m1.89491295s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:57.740584028 +0000 UTC m=+146.267048714" watchObservedRunningTime="2026-02-19 10:10:57.89491295 +0000 UTC m=+146.421377636" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.898712 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cpcsw"] Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.919388 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns-operator/dns-operator-744455d44c-bsbln" podStartSLOduration=122.91935155 podStartE2EDuration="2m2.91935155s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:57.878421509 +0000 UTC m=+146.404886205" watchObservedRunningTime="2026-02-19 10:10:57.91935155 +0000 UTC m=+146.445816236" Feb 19 10:10:57 crc kubenswrapper[4811]: I0219 10:10:57.962038 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:57 crc kubenswrapper[4811]: E0219 10:10:57.962557 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:58.462529455 +0000 UTC m=+146.988994141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.066365 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:58 crc kubenswrapper[4811]: E0219 10:10:58.066805 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:58.566788378 +0000 UTC m=+147.093253064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.066799 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5kfzw"] Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.068768 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.071845 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kfzw"] Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.077504 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.167868 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.168134 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmb2k\" (UniqueName: \"kubernetes.io/projected/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-kube-api-access-vmb2k\") pod \"redhat-operators-5kfzw\" (UID: \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.168171 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-catalog-content\") pod \"redhat-operators-5kfzw\" (UID: \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.168253 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-utilities\") pod \"redhat-operators-5kfzw\" (UID: 
\"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:10:58 crc kubenswrapper[4811]: E0219 10:10:58.168427 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:58.668401049 +0000 UTC m=+147.194865725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.184670 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k4gqt"] Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.275492 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.275594 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmb2k\" (UniqueName: \"kubernetes.io/projected/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-kube-api-access-vmb2k\") pod \"redhat-operators-5kfzw\" (UID: \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:10:58 crc 
kubenswrapper[4811]: I0219 10:10:58.275629 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-catalog-content\") pod \"redhat-operators-5kfzw\" (UID: \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.275701 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-utilities\") pod \"redhat-operators-5kfzw\" (UID: \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.276290 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-utilities\") pod \"redhat-operators-5kfzw\" (UID: \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.278061 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-catalog-content\") pod \"redhat-operators-5kfzw\" (UID: \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:10:58 crc kubenswrapper[4811]: E0219 10:10:58.279131 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:58.779106676 +0000 UTC m=+147.305571532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.336999 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmb2k\" (UniqueName: \"kubernetes.io/projected/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-kube-api-access-vmb2k\") pod \"redhat-operators-5kfzw\" (UID: \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.411549 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:58 crc kubenswrapper[4811]: E0219 10:10:58.413009 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:58.912960152 +0000 UTC m=+147.439424838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.510715 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zq68z"] Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.512175 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.516925 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:58 crc kubenswrapper[4811]: E0219 10:10:58.517346 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.017332428 +0000 UTC m=+147.543797114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.581224 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.618132 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:10:58 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:10:58 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:10:58 crc kubenswrapper[4811]: healthz check failed Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.618206 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.619796 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.620081 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-utilities\") pod \"redhat-operators-zq68z\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.620103 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-catalog-content\") pod \"redhat-operators-zq68z\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.620219 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krrpn\" (UniqueName: \"kubernetes.io/projected/f528e872-8fd6-4cfc-b4d5-129433fed8f6-kube-api-access-krrpn\") pod \"redhat-operators-zq68z\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:10:58 crc kubenswrapper[4811]: E0219 10:10:58.620342 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.120323532 +0000 UTC m=+147.646788218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.638401 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zq68z"] Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.696473 4811 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4w6gw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.696520 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" podUID="420c2489-d813-4281-8a38-ecc47f25b991" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.724907 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.724985 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-krrpn\" (UniqueName: \"kubernetes.io/projected/f528e872-8fd6-4cfc-b4d5-129433fed8f6-kube-api-access-krrpn\") pod \"redhat-operators-zq68z\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.725075 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-utilities\") pod \"redhat-operators-zq68z\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.725101 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-catalog-content\") pod \"redhat-operators-zq68z\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.726102 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-catalog-content\") pod \"redhat-operators-zq68z\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:10:58 crc kubenswrapper[4811]: E0219 10:10:58.726494 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.22643748 +0000 UTC m=+147.752902166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.727111 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-utilities\") pod \"redhat-operators-zq68z\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.729138 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kfhx5" event={"ID":"0e00367b-32b1-47e6-b543-82deeff3d190","Type":"ContainerStarted","Data":"4aa28ec04a86bf7aa17257d27ca7fce5ef7e3ee9c974285834ccd668218f912d"} Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.745279 4811 generic.go:334] "Generic (PLEG): container finished" podID="006d82c1-543d-45da-8e4b-75a557df7959" containerID="f7477d924d9b5c7ffc2de6a0d4b96875b781cf5ceea61ded5c9fdcb309c31985" exitCode=0 Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.745361 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pctn2" event={"ID":"006d82c1-543d-45da-8e4b-75a557df7959","Type":"ContainerDied","Data":"f7477d924d9b5c7ffc2de6a0d4b96875b781cf5ceea61ded5c9fdcb309c31985"} Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.745561 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pctn2" 
event={"ID":"006d82c1-543d-45da-8e4b-75a557df7959","Type":"ContainerStarted","Data":"6cac65382904eaaa4b2f739b36cd51fc1881c69291b0ef5a9d2102b775ea87ac"} Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.756399 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.776642 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" event={"ID":"89c5cf4e-a301-48d2-8d31-056b6afc187b","Type":"ContainerStarted","Data":"61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299"} Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.777758 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.782666 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mxhh7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.782730 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" podUID="89c5cf4e-a301-48d2-8d31-056b6afc187b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.821082 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksjbn" event={"ID":"ca27a1be-f8a5-447f-b61f-da206a041290","Type":"ContainerStarted","Data":"6283b03238859915772456f8ecf843e140409957420ab999199a77ce3cb63a9b"} Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.828791 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:58 crc kubenswrapper[4811]: E0219 10:10:58.829375 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.329355082 +0000 UTC m=+147.855819778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.837950 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" podStartSLOduration=122.837912325 podStartE2EDuration="2m2.837912325s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:58.814264864 +0000 UTC m=+147.340729550" watchObservedRunningTime="2026-02-19 10:10:58.837912325 +0000 UTC m=+147.364377011" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.850625 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" 
event={"ID":"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899","Type":"ContainerStarted","Data":"771306bee2591b778b7eea51a42049bfc901933dcfb96abf55a31c0449ee4661"} Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.851628 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krrpn\" (UniqueName: \"kubernetes.io/projected/f528e872-8fd6-4cfc-b4d5-129433fed8f6-kube-api-access-krrpn\") pod \"redhat-operators-zq68z\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.884214 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" event={"ID":"813c1daf-b1ec-4d19-8cb9-39b9088bb3ff","Type":"ContainerStarted","Data":"87fd1af764cedf1535386520475446882c240092ac29a19ef54b3653b12f863e"} Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.888657 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" podStartSLOduration=123.888632108 podStartE2EDuration="2m3.888632108s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:58.882709168 +0000 UTC m=+147.409173854" watchObservedRunningTime="2026-02-19 10:10:58.888632108 +0000 UTC m=+147.415096784" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.928269 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l7559" podStartSLOduration=122.928241728 podStartE2EDuration="2m2.928241728s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:58.908013288 
+0000 UTC m=+147.434477974" watchObservedRunningTime="2026-02-19 10:10:58.928241728 +0000 UTC m=+147.454706404" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.931679 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.933339 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7qnh"] Feb 19 10:10:58 crc kubenswrapper[4811]: E0219 10:10:58.934133 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.434113887 +0000 UTC m=+147.960578683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.964872 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-79ql7"] Feb 19 10:10:58 crc kubenswrapper[4811]: I0219 10:10:58.988707 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.014297 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpcsw" event={"ID":"c2f288dc-ac27-4a76-84da-d3f4e2dccf77","Type":"ContainerStarted","Data":"086e0a634459f0571db3040f003db99438a3ddd37f6937a139da5c160f3a14ed"} Feb 19 10:10:59 crc kubenswrapper[4811]: W0219 10:10:59.015327 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f0825a4_4e84_49d7_929e_165ced3ed2f8.slice/crio-193ac46de448e3f035c8a84417b890634fad12dada53d4d502f8977125fbe775 WatchSource:0}: Error finding container 193ac46de448e3f035c8a84417b890634fad12dada53d4d502f8977125fbe775: Status 404 returned error can't find the container with id 193ac46de448e3f035c8a84417b890634fad12dada53d4d502f8977125fbe775 Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.028647 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8jqkt" event={"ID":"28d91e3a-4d4d-42c7-8f03-d44e6e539e66","Type":"ContainerStarted","Data":"2ae766b4d049c3c99f4cb8e6ecea5ff69279d9736710508bc1ed3aa4f169c657"} Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.043270 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.044559 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 10:10:59.544518457 +0000 UTC m=+148.070983143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.045178 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.047434 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.547412885 +0000 UTC m=+148.073877571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.094910 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" event={"ID":"fdd47192-fa50-42cb-8679-fba3f93df36f","Type":"ContainerStarted","Data":"1ff15c18be505f4a7f4e0a51e3b6a574d9639fd2fddce6e351f30e8628087897"} Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.095159 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.114694 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4gqt" event={"ID":"c4c2ec9c-52c4-442e-adf8-eddb1d74d250","Type":"ContainerStarted","Data":"01c20f173fbc9e3303a9ade69b9aa887ada0fbb98b575561ef1441f59609b045"} Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.122598 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f2zf8" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.148773 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:59 crc kubenswrapper[4811]: 
E0219 10:10:59.150957 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.650914961 +0000 UTC m=+148.177379647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.153574 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.157179 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.657155329 +0000 UTC m=+148.183620205 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.186474 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" podStartSLOduration=123.186433214 podStartE2EDuration="2m3.186433214s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:10:59.135956026 +0000 UTC m=+147.662420722" watchObservedRunningTime="2026-02-19 10:10:59.186433214 +0000 UTC m=+147.712897900" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.256567 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.257478 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.757431729 +0000 UTC m=+148.283896415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.257722 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.269755 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.76973322 +0000 UTC m=+148.296197906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.360297 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.361328 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.861299243 +0000 UTC m=+148.387763929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.366704 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kfzw"] Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.431218 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:10:59 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:10:59 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:10:59 crc kubenswrapper[4811]: healthz check failed Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.431289 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.462974 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.463516 4811 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:10:59.963497988 +0000 UTC m=+148.489962674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.488687 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.490006 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.494700 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.495040 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.499103 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.535730 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.535782 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.545044 4811 patch_prober.go:28] interesting pod/console-f9d7485db-dh45w container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.545129 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dh45w" podUID="de01cd94-d3c6-4fa5-9033-9a65a318ab82" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.550197 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: 
connect: connection refused" start-of-body= Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.550264 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.550557 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.550651 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.565399 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.566017 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:00.065987309 +0000 UTC m=+148.592451985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.668376 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.668621 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9bdfd05-247a-4614-839c-1d9a8c90b630-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e9bdfd05-247a-4614-839c-1d9a8c90b630\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.668794 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9bdfd05-247a-4614-839c-1d9a8c90b630-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9bdfd05-247a-4614-839c-1d9a8c90b630\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.669439 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 10:11:00.169411303 +0000 UTC m=+148.695876139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.715484 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zq68z"] Feb 19 10:10:59 crc kubenswrapper[4811]: W0219 10:10:59.769831 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf528e872_8fd6_4cfc_b4d5_129433fed8f6.slice/crio-5a4295896315f97ba8415b3fbb34f50d37ed4447bf46c218b23c1ec68efae21e WatchSource:0}: Error finding container 5a4295896315f97ba8415b3fbb34f50d37ed4447bf46c218b23c1ec68efae21e: Status 404 returned error can't find the container with id 5a4295896315f97ba8415b3fbb34f50d37ed4447bf46c218b23c1ec68efae21e Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.770059 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.770467 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9bdfd05-247a-4614-839c-1d9a8c90b630-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"e9bdfd05-247a-4614-839c-1d9a8c90b630\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.770562 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9bdfd05-247a-4614-839c-1d9a8c90b630-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9bdfd05-247a-4614-839c-1d9a8c90b630\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.770992 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9bdfd05-247a-4614-839c-1d9a8c90b630-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e9bdfd05-247a-4614-839c-1d9a8c90b630\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.771002 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:00.270981313 +0000 UTC m=+148.797445999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.802681 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9bdfd05-247a-4614-839c-1d9a8c90b630-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9bdfd05-247a-4614-839c-1d9a8c90b630\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.821294 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.872494 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.872970 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:00.372951102 +0000 UTC m=+148.899415788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:10:59 crc kubenswrapper[4811]: I0219 10:10:59.974748 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:10:59 crc kubenswrapper[4811]: E0219 10:10:59.975388 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:00.475360782 +0000 UTC m=+149.001825468 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.077251 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:00 crc kubenswrapper[4811]: E0219 10:11:00.077705 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:00.5776842 +0000 UTC m=+149.104148886 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.167310 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" event={"ID":"62dd3b62-846d-4fc2-aefe-2da8f53ac288","Type":"ContainerStarted","Data":"4f0303f25e9d5e9b09abd7f1fcb239dfa8672f3e36c3e63f9954ac1d7daecd90"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.178783 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:00 crc kubenswrapper[4811]: E0219 10:11:00.179331 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:00.679306821 +0000 UTC m=+149.205771507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.189367 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" event={"ID":"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086","Type":"ContainerStarted","Data":"b59e9cb3306f24ae7af5f1e9adcfa84f12c6836895184d0ee6ddf7cb45881689"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.189423 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" event={"ID":"f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086","Type":"ContainerStarted","Data":"78aa5fb68de65c82e27aa3240a85cac6b299fab3e44d5271b5f116b9cb4f1802"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.195691 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2ljp" podStartSLOduration=124.195665489 podStartE2EDuration="2m4.195665489s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:11:00.195213518 +0000 UTC m=+148.721678204" watchObservedRunningTime="2026-02-19 10:11:00.195665489 +0000 UTC m=+148.722130175" Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.223110 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kfhx5" 
event={"ID":"0e00367b-32b1-47e6-b543-82deeff3d190","Type":"ContainerStarted","Data":"ed8d2872ad9ce467e1b8ddb11b7a941768ec2d3c64341f979e392a9440edba9a"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.223569 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kfhx5" Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.255825 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" podStartSLOduration=125.255785145 podStartE2EDuration="2m5.255785145s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:11:00.25347073 +0000 UTC m=+148.779935416" watchObservedRunningTime="2026-02-19 10:11:00.255785145 +0000 UTC m=+148.782249831" Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.290116 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:00 crc kubenswrapper[4811]: E0219 10:11:00.294371 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:00.794336091 +0000 UTC m=+149.320800777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.295480 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kfhx5" podStartSLOduration=13.295417826 podStartE2EDuration="13.295417826s" podCreationTimestamp="2026-02-19 10:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:11:00.287360855 +0000 UTC m=+148.813825541" watchObservedRunningTime="2026-02-19 10:11:00.295417826 +0000 UTC m=+148.821882512" Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.297860 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" event={"ID":"780ea2c0-bcb2-40bd-b6b8-c62cd877d72e","Type":"ContainerStarted","Data":"d8436695dde7ec5e91b78e8ee07dea45007a4b11b27c298993eb9d09e8fecf72"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.311725 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zq68z" event={"ID":"f528e872-8fd6-4cfc-b4d5-129433fed8f6","Type":"ContainerStarted","Data":"3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.311791 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zq68z" event={"ID":"f528e872-8fd6-4cfc-b4d5-129433fed8f6","Type":"ContainerStarted","Data":"5a4295896315f97ba8415b3fbb34f50d37ed4447bf46c218b23c1ec68efae21e"} 
Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.325181 4811 generic.go:334] "Generic (PLEG): container finished" podID="bad2100b-8b05-432c-96b8-19dbc3c5471c" containerID="cdb14a6f78dc08f35d0277d3021badf851969040178e46ee5f71687e8a7f8d41" exitCode=0 Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.326638 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79ql7" event={"ID":"bad2100b-8b05-432c-96b8-19dbc3c5471c","Type":"ContainerDied","Data":"cdb14a6f78dc08f35d0277d3021badf851969040178e46ee5f71687e8a7f8d41"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.326723 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79ql7" event={"ID":"bad2100b-8b05-432c-96b8-19dbc3c5471c","Type":"ContainerStarted","Data":"15c632ccddd5c86c8b2274253a5eb66c0f76250c95da588c3f3763fed6ee99dc"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.335806 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-c46m7" podStartSLOduration=124.335769414 podStartE2EDuration="2m4.335769414s" podCreationTimestamp="2026-02-19 10:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:11:00.329396842 +0000 UTC m=+148.855861538" watchObservedRunningTime="2026-02-19 10:11:00.335769414 +0000 UTC m=+148.862234100" Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.346843 4811 generic.go:334] "Generic (PLEG): container finished" podID="ca27a1be-f8a5-447f-b61f-da206a041290" containerID="59356221160a5e3471812120a030d86154a810af003a1f58a51240fb86fcc7b8" exitCode=0 Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.346979 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksjbn" 
event={"ID":"ca27a1be-f8a5-447f-b61f-da206a041290","Type":"ContainerDied","Data":"59356221160a5e3471812120a030d86154a810af003a1f58a51240fb86fcc7b8"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.409123 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:00 crc kubenswrapper[4811]: E0219 10:11:00.410039 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:00.909973034 +0000 UTC m=+149.436437710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.410589 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:00 crc kubenswrapper[4811]: E0219 10:11:00.412508 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:00.912487824 +0000 UTC m=+149.438952720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.439985 4811 generic.go:334] "Generic (PLEG): container finished" podID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" containerID="75dcf83f5a9e9158ccf7501167a9e07bdc17a950bdd08eaba7e6e88b5d1ae511" exitCode=0 Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.440295 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpcsw" event={"ID":"c2f288dc-ac27-4a76-84da-d3f4e2dccf77","Type":"ContainerDied","Data":"75dcf83f5a9e9158ccf7501167a9e07bdc17a950bdd08eaba7e6e88b5d1ae511"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.452586 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:00 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:00 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:00 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.452648 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" 
podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.455811 4811 generic.go:334] "Generic (PLEG): container finished" podID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" containerID="9991986c30fa830272e8d3f8943dea3dcd8f153f611336ee08c147b7cfa1114e" exitCode=0 Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.455899 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4gqt" event={"ID":"c4c2ec9c-52c4-442e-adf8-eddb1d74d250","Type":"ContainerDied","Data":"9991986c30fa830272e8d3f8943dea3dcd8f153f611336ee08c147b7cfa1114e"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.473619 4811 generic.go:334] "Generic (PLEG): container finished" podID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerID="ff689b1038b3f3f15850c484c9e370ac4cbb811c54b261b0d7c2b13cefc35914" exitCode=0 Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.473715 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfzw" event={"ID":"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86","Type":"ContainerDied","Data":"ff689b1038b3f3f15850c484c9e370ac4cbb811c54b261b0d7c2b13cefc35914"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.473769 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfzw" event={"ID":"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86","Type":"ContainerStarted","Data":"74cb6819d01e888fed097ffe09a93b01a2f5255f0c3aa6fc47a990079771b243"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.512682 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:00 crc kubenswrapper[4811]: E0219 10:11:00.513348 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:01.013324266 +0000 UTC m=+149.539788952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.513401 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ntk89" event={"ID":"a71943f3-8cb9-48ff-aaf0-38a30132ffd4","Type":"ContainerStarted","Data":"f788206ad1aac4f5036c6c3144f4df00c5c94d8b45dc5f845b416364f8bb7c4d"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.563207 4811 generic.go:334] "Generic (PLEG): container finished" podID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" containerID="1912902d3daa2ab267307f0734427337957c7c1d7a0a71ce24bd805e0141fadf" exitCode=0 Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.563516 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7qnh" event={"ID":"0f0825a4-4e84-49d7-929e-165ced3ed2f8","Type":"ContainerDied","Data":"1912902d3daa2ab267307f0734427337957c7c1d7a0a71ce24bd805e0141fadf"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.563597 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7qnh" 
event={"ID":"0f0825a4-4e84-49d7-929e-165ced3ed2f8","Type":"ContainerStarted","Data":"193ac46de448e3f035c8a84417b890634fad12dada53d4d502f8977125fbe775"} Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.578592 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mxhh7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.578685 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" podUID="89c5cf4e-a301-48d2-8d31-056b6afc187b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.615049 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:00 crc kubenswrapper[4811]: E0219 10:11:00.617399 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:01.117380425 +0000 UTC m=+149.643845111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.683677 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.717421 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:00 crc kubenswrapper[4811]: E0219 10:11:00.719697 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:01.219666142 +0000 UTC m=+149.746130838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.820841 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:00 crc kubenswrapper[4811]: E0219 10:11:00.821560 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:01.321538259 +0000 UTC m=+149.848002945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.931254 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:00 crc kubenswrapper[4811]: E0219 10:11:00.932270 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:01.432247696 +0000 UTC m=+149.958712382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.990161 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:11:00 crc kubenswrapper[4811]: I0219 10:11:00.990254 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:00.998696 4811 patch_prober.go:28] interesting pod/apiserver-76f77b778f-b5p7b container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:00.998786 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" podUID="f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.021209 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.039243 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.045060 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:01.545035902 +0000 UTC m=+150.071500588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.140250 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.140745 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:01.640714112 +0000 UTC m=+150.167178808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.160169 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w6gw" Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.243104 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.244512 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:01.744490935 +0000 UTC m=+150.270955621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.347400 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.348142 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:01.848114813 +0000 UTC m=+150.374579499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.407085 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mxhh7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.407128 4811 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mxhh7 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.407151 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" podUID="89c5cf4e-a301-48d2-8d31-056b6afc187b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.407142 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" podUID="89c5cf4e-a301-48d2-8d31-056b6afc187b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 
10:11:01.420404 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.432410 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:01 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:01 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:01 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.432509 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.452919 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.453677 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:01.953648237 +0000 UTC m=+150.480113103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.554321 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.554518 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.05448872 +0000 UTC m=+150.580953406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.554908 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.555376 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.055364921 +0000 UTC m=+150.581829607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.599178 4811 generic.go:334] "Generic (PLEG): container finished" podID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" containerID="3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd" exitCode=0 Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.599572 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zq68z" event={"ID":"f528e872-8fd6-4cfc-b4d5-129433fed8f6","Type":"ContainerDied","Data":"3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd"} Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.645658 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9bdfd05-247a-4614-839c-1d9a8c90b630","Type":"ContainerStarted","Data":"ec49156db15368602a9478e2132e78cec8d84e08c80dd0b29c263f30ece745ff"} Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.645714 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9bdfd05-247a-4614-839c-1d9a8c90b630","Type":"ContainerStarted","Data":"8c42b5db40b9a098300e5dd18ebcc4b47b00fb01be451d2471c19f4ee4829f46"} Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.656223 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.656518 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.15648094 +0000 UTC m=+150.682945626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.656882 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.657327 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.157308069 +0000 UTC m=+150.683772745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.658169 4811 generic.go:334] "Generic (PLEG): container finished" podID="3fefb44b-8bb5-44f8-be2b-4f0ada1e3899" containerID="771306bee2591b778b7eea51a42049bfc901933dcfb96abf55a31c0449ee4661" exitCode=0 Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.659123 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" event={"ID":"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899","Type":"ContainerDied","Data":"771306bee2591b778b7eea51a42049bfc901933dcfb96abf55a31c0449ee4661"} Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.674558 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.674525838 podStartE2EDuration="2.674525838s" podCreationTimestamp="2026-02-19 10:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:11:01.670224236 +0000 UTC m=+150.196688922" watchObservedRunningTime="2026-02-19 10:11:01.674525838 +0000 UTC m=+150.200990524" Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.681583 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.761238 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.766385 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.266357637 +0000 UTC m=+150.792822323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.874482 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.875043 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.375014115 +0000 UTC m=+150.901478991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.979867 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.980223 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.48014666 +0000 UTC m=+151.006611346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:01 crc kubenswrapper[4811]: I0219 10:11:01.980995 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:01 crc kubenswrapper[4811]: E0219 10:11:01.981545 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.481517992 +0000 UTC m=+151.007982678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.083471 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.083685 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.583659276 +0000 UTC m=+151.110123962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.083922 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.084329 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.584321121 +0000 UTC m=+151.110785807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.188130 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.188619 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.688595615 +0000 UTC m=+151.215060301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.290264 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.290932 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.790902243 +0000 UTC m=+151.317367109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.398729 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.398990 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.898939276 +0000 UTC m=+151.425403962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.399302 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.399781 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:02.899762416 +0000 UTC m=+151.426227102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.426186 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:02 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:02 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:02 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.426280 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.501114 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.501292 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 10:11:03.001261214 +0000 UTC m=+151.527725900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.501328 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.501937 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.0019284 +0000 UTC m=+151.528393086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.602334 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.602514 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.102486776 +0000 UTC m=+151.628951462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.602870 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.603404 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.103383047 +0000 UTC m=+151.629847733 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.699888 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ntk89" event={"ID":"a71943f3-8cb9-48ff-aaf0-38a30132ffd4","Type":"ContainerStarted","Data":"95d3ba57824daa0cd25daf7249914f8c32f6439d144266c68959f26319978089"} Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.704573 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.705174 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.205130211 +0000 UTC m=+151.731594897 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.708726 4811 generic.go:334] "Generic (PLEG): container finished" podID="e9bdfd05-247a-4614-839c-1d9a8c90b630" containerID="ec49156db15368602a9478e2132e78cec8d84e08c80dd0b29c263f30ece745ff" exitCode=0 Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.708802 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9bdfd05-247a-4614-839c-1d9a8c90b630","Type":"ContainerDied","Data":"ec49156db15368602a9478e2132e78cec8d84e08c80dd0b29c263f30ece745ff"} Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.807260 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.807583 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.307559651 +0000 UTC m=+151.834024547 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.913186 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.913456 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.413400263 +0000 UTC m=+151.939864939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:02 crc kubenswrapper[4811]: I0219 10:11:02.914194 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:02 crc kubenswrapper[4811]: E0219 10:11:02.914634 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.414617382 +0000 UTC m=+151.941082068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.015600 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:03 crc kubenswrapper[4811]: E0219 10:11:03.016217 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.516194732 +0000 UTC m=+152.042659418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.117619 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:03 crc kubenswrapper[4811]: E0219 10:11:03.118150 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.618134259 +0000 UTC m=+152.144598945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.219310 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:03 crc kubenswrapper[4811]: E0219 10:11:03.219718 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.719677749 +0000 UTC m=+152.246142425 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.219921 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:03 crc kubenswrapper[4811]: E0219 10:11:03.220374 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.720365005 +0000 UTC m=+152.246829691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.325648 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:03 crc kubenswrapper[4811]: E0219 10:11:03.326198 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.826171926 +0000 UTC m=+152.352636612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.428227 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:03 crc kubenswrapper[4811]: E0219 10:11:03.428937 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:03.928908993 +0000 UTC m=+152.455373869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.444587 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:03 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:03 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:03 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.444695 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.444894 4811 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.530370 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:03 crc kubenswrapper[4811]: E0219 10:11:03.530899 4811 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:04.030850142 +0000 UTC m=+152.557314838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.632853 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:03 crc kubenswrapper[4811]: E0219 10:11:03.633374 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:04.133357034 +0000 UTC m=+152.659821720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.670161 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.734299 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.734878 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqvst\" (UniqueName: \"kubernetes.io/projected/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-kube-api-access-wqvst\") pod \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.734975 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-secret-volume\") pod \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.735059 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-config-volume\") pod \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\" (UID: \"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899\") " Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.735323 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.735394 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.738244 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-config-volume" (OuterVolumeSpecName: "config-volume") pod "3fefb44b-8bb5-44f8-be2b-4f0ada1e3899" (UID: "3fefb44b-8bb5-44f8-be2b-4f0ada1e3899"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:11:03 crc kubenswrapper[4811]: E0219 10:11:03.739131 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:04.239071502 +0000 UTC m=+152.765536188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.739865 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.755085 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.759778 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.760317 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq" event={"ID":"3fefb44b-8bb5-44f8-be2b-4f0ada1e3899","Type":"ContainerDied","Data":"3e75176f9464df95c7e89e8b95065f2afbf16a835a9ba29faa6144d7b7efbef9"} Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.760357 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e75176f9464df95c7e89e8b95065f2afbf16a835a9ba29faa6144d7b7efbef9" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.785249 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3fefb44b-8bb5-44f8-be2b-4f0ada1e3899" (UID: "3fefb44b-8bb5-44f8-be2b-4f0ada1e3899"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.785926 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-kube-api-access-wqvst" (OuterVolumeSpecName: "kube-api-access-wqvst") pod "3fefb44b-8bb5-44f8-be2b-4f0ada1e3899" (UID: "3fefb44b-8bb5-44f8-be2b-4f0ada1e3899"). InnerVolumeSpecName "kube-api-access-wqvst". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.787230 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.839518 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.839601 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.839668 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.839763 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqvst\" (UniqueName: \"kubernetes.io/projected/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-kube-api-access-wqvst\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.839777 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:03 crc kubenswrapper[4811]: 
I0219 10:11:03.839794 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:03 crc kubenswrapper[4811]: E0219 10:11:03.853173 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:04.353132579 +0000 UTC m=+152.879597265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.864516 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.874383 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:11:03 crc kubenswrapper[4811]: I0219 10:11:03.951777 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:03 crc kubenswrapper[4811]: E0219 10:11:03.952301 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 10:11:04.452275511 +0000 UTC m=+152.978740197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.016173 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.053595 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:04 crc kubenswrapper[4811]: E0219 10:11:04.054103 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 10:11:04.554087017 +0000 UTC m=+153.080551703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-p4kkk" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.099527 4811 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T10:11:03.444925973Z","Handler":null,"Name":""} Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.106221 4811 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.106271 4811 
csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.146757 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.162333 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.180027 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.264599 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.274855 4811 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.274928 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.313854 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.366671 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9bdfd05-247a-4614-839c-1d9a8c90b630-kubelet-dir\") pod \"e9bdfd05-247a-4614-839c-1d9a8c90b630\" (UID: \"e9bdfd05-247a-4614-839c-1d9a8c90b630\") " Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.366830 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9bdfd05-247a-4614-839c-1d9a8c90b630-kube-api-access\") pod \"e9bdfd05-247a-4614-839c-1d9a8c90b630\" (UID: \"e9bdfd05-247a-4614-839c-1d9a8c90b630\") " Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.366788 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9bdfd05-247a-4614-839c-1d9a8c90b630-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e9bdfd05-247a-4614-839c-1d9a8c90b630" (UID: "e9bdfd05-247a-4614-839c-1d9a8c90b630"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.391762 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-p4kkk\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.404742 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9bdfd05-247a-4614-839c-1d9a8c90b630-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e9bdfd05-247a-4614-839c-1d9a8c90b630" (UID: "e9bdfd05-247a-4614-839c-1d9a8c90b630"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.434752 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:04 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:04 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:04 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.434836 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.477970 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e9bdfd05-247a-4614-839c-1d9a8c90b630-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.478544 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9bdfd05-247a-4614-839c-1d9a8c90b630-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.561121 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.754224 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.802326 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ntk89" event={"ID":"a71943f3-8cb9-48ff-aaf0-38a30132ffd4","Type":"ContainerStarted","Data":"9ee5566d0997b76f584aa6de2a12b3c76b30bdf364cae9e453d9faf74ba3bcf7"} Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.813166 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9bdfd05-247a-4614-839c-1d9a8c90b630","Type":"ContainerDied","Data":"8c42b5db40b9a098300e5dd18ebcc4b47b00fb01be451d2471c19f4ee4829f46"} Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.813252 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c42b5db40b9a098300e5dd18ebcc4b47b00fb01be451d2471c19f4ee4829f46" Feb 19 10:11:04 crc kubenswrapper[4811]: I0219 10:11:04.813391 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 10:11:04 crc kubenswrapper[4811]: W0219 10:11:04.926810 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-df1a992fa96a399b5d4499fe2fbe9ff096583f7ee12b582b9f6c6fa798d83ea0 WatchSource:0}: Error finding container df1a992fa96a399b5d4499fe2fbe9ff096583f7ee12b582b9f6c6fa798d83ea0: Status 404 returned error can't find the container with id df1a992fa96a399b5d4499fe2fbe9ff096583f7ee12b582b9f6c6fa798d83ea0 Feb 19 10:11:04 crc kubenswrapper[4811]: W0219 10:11:04.941498 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-83915cd691168c493705b6c8a591d0026d56c6e08436e046e4949a1ae5c88ccc WatchSource:0}: Error finding container 83915cd691168c493705b6c8a591d0026d56c6e08436e046e4949a1ae5c88ccc: Status 404 returned error can't find the container with id 83915cd691168c493705b6c8a591d0026d56c6e08436e046e4949a1ae5c88ccc Feb 19 10:11:05 crc kubenswrapper[4811]: W0219 10:11:05.180086 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-e6f06184748fec348a93a6e58e6416de3fbd442e2ba35cf7e4e79b254d747062 WatchSource:0}: Error finding container e6f06184748fec348a93a6e58e6416de3fbd442e2ba35cf7e4e79b254d747062: Status 404 returned error can't find the container with id e6f06184748fec348a93a6e58e6416de3fbd442e2ba35cf7e4e79b254d747062 Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.430215 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Feb 19 10:11:05 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:05 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:05 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.430377 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.451345 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p4kkk"] Feb 19 10:11:05 crc kubenswrapper[4811]: W0219 10:11:05.545783 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9c16aff_60a4_404a_b3f6_bc951d28e931.slice/crio-de481ca995a290e0279ace06a4fd1b03b2f783eb4c9b85c17e32501f8760a734 WatchSource:0}: Error finding container de481ca995a290e0279ace06a4fd1b03b2f783eb4c9b85c17e32501f8760a734: Status 404 returned error can't find the container with id de481ca995a290e0279ace06a4fd1b03b2f783eb4c9b85c17e32501f8760a734 Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.840584 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" event={"ID":"d9c16aff-60a4-404a-b3f6-bc951d28e931","Type":"ContainerStarted","Data":"de481ca995a290e0279ace06a4fd1b03b2f783eb4c9b85c17e32501f8760a734"} Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.844572 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"501d6c49b8a643357291e9bafc5ab10e75eea17f5edb214130057aec1eccda27"} Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.844600 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"83915cd691168c493705b6c8a591d0026d56c6e08436e046e4949a1ae5c88ccc"} Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.845366 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.869125 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f329f43f62e3eeb88f7d1ab5517dde742cc439c81226045c892c783252d4aa76"} Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.869203 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e6f06184748fec348a93a6e58e6416de3fbd442e2ba35cf7e4e79b254d747062"} Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.923948 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-ntk89" event={"ID":"a71943f3-8cb9-48ff-aaf0-38a30132ffd4","Type":"ContainerStarted","Data":"1745102271a455311e213431fd9a6e2e67716f5d75f1b2727007accec2a0bb75"} Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.974107 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"21f9781e71cd64dd2afaf314f1f706c74ed88c95942557a302d38f40d1eb8320"} Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.974182 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"df1a992fa96a399b5d4499fe2fbe9ff096583f7ee12b582b9f6c6fa798d83ea0"} Feb 19 10:11:05 crc kubenswrapper[4811]: I0219 10:11:05.974957 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-ntk89" podStartSLOduration=17.974921022 podStartE2EDuration="17.974921022s" podCreationTimestamp="2026-02-19 10:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:11:05.968969571 +0000 UTC m=+154.495434267" watchObservedRunningTime="2026-02-19 10:11:05.974921022 +0000 UTC m=+154.501385708" Feb 19 10:11:06 crc kubenswrapper[4811]: I0219 10:11:06.029124 4811 patch_prober.go:28] interesting pod/apiserver-76f77b778f-b5p7b container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 10:11:06 crc kubenswrapper[4811]: [+]log ok Feb 19 10:11:06 crc kubenswrapper[4811]: [+]etcd ok Feb 19 10:11:06 crc kubenswrapper[4811]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 10:11:06 crc kubenswrapper[4811]: [+]poststarthook/generic-apiserver-start-informers ok Feb 19 10:11:06 crc kubenswrapper[4811]: [+]poststarthook/max-in-flight-filter ok Feb 19 10:11:06 crc kubenswrapper[4811]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 10:11:06 crc kubenswrapper[4811]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 19 10:11:06 crc kubenswrapper[4811]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 19 10:11:06 crc kubenswrapper[4811]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 19 10:11:06 crc kubenswrapper[4811]: 
[+]poststarthook/project.openshift.io-projectcache ok Feb 19 10:11:06 crc kubenswrapper[4811]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 19 10:11:06 crc kubenswrapper[4811]: [+]poststarthook/openshift.io-startinformers ok Feb 19 10:11:06 crc kubenswrapper[4811]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 19 10:11:06 crc kubenswrapper[4811]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 10:11:06 crc kubenswrapper[4811]: livez check failed Feb 19 10:11:06 crc kubenswrapper[4811]: I0219 10:11:06.029227 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" podUID="f05c4a1f-e5c9-4f1f-ae3e-b9faecdf1086" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:06 crc kubenswrapper[4811]: I0219 10:11:06.437372 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:06 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:06 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:06 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:06 crc kubenswrapper[4811]: I0219 10:11:06.437818 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.062150 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" event={"ID":"d9c16aff-60a4-404a-b3f6-bc951d28e931","Type":"ContainerStarted","Data":"7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250"} Feb 19 
10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.062316 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.102307 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" podStartSLOduration=132.10227228 podStartE2EDuration="2m12.10227228s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:11:07.102135366 +0000 UTC m=+155.628600052" watchObservedRunningTime="2026-02-19 10:11:07.10227228 +0000 UTC m=+155.628736966" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.450768 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:07 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:07 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:07 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.450850 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.835850 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 10:11:07 crc kubenswrapper[4811]: E0219 10:11:07.836208 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9bdfd05-247a-4614-839c-1d9a8c90b630" containerName="pruner" Feb 19 10:11:07 crc 
kubenswrapper[4811]: I0219 10:11:07.836224 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9bdfd05-247a-4614-839c-1d9a8c90b630" containerName="pruner" Feb 19 10:11:07 crc kubenswrapper[4811]: E0219 10:11:07.836235 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fefb44b-8bb5-44f8-be2b-4f0ada1e3899" containerName="collect-profiles" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.836241 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fefb44b-8bb5-44f8-be2b-4f0ada1e3899" containerName="collect-profiles" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.836393 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9bdfd05-247a-4614-839c-1d9a8c90b630" containerName="pruner" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.836414 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fefb44b-8bb5-44f8-be2b-4f0ada1e3899" containerName="collect-profiles" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.837036 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.839916 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.840259 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.854577 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.924556 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af5c10ef-4820-4bac-9788-a547bf06fc20-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"af5c10ef-4820-4bac-9788-a547bf06fc20\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 10:11:07 crc kubenswrapper[4811]: I0219 10:11:07.924668 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af5c10ef-4820-4bac-9788-a547bf06fc20-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"af5c10ef-4820-4bac-9788-a547bf06fc20\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 10:11:08 crc kubenswrapper[4811]: I0219 10:11:08.027135 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af5c10ef-4820-4bac-9788-a547bf06fc20-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"af5c10ef-4820-4bac-9788-a547bf06fc20\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 10:11:08 crc kubenswrapper[4811]: I0219 10:11:08.027045 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/af5c10ef-4820-4bac-9788-a547bf06fc20-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"af5c10ef-4820-4bac-9788-a547bf06fc20\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 10:11:08 crc kubenswrapper[4811]: I0219 10:11:08.027264 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af5c10ef-4820-4bac-9788-a547bf06fc20-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"af5c10ef-4820-4bac-9788-a547bf06fc20\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 10:11:08 crc kubenswrapper[4811]: I0219 10:11:08.062097 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af5c10ef-4820-4bac-9788-a547bf06fc20-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"af5c10ef-4820-4bac-9788-a547bf06fc20\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 10:11:08 crc kubenswrapper[4811]: I0219 10:11:08.187857 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 10:11:08 crc kubenswrapper[4811]: I0219 10:11:08.428500 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:08 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:08 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:08 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:08 crc kubenswrapper[4811]: I0219 10:11:08.428615 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:08 crc kubenswrapper[4811]: I0219 10:11:08.929806 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 10:11:09 crc kubenswrapper[4811]: I0219 10:11:09.104078 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"af5c10ef-4820-4bac-9788-a547bf06fc20","Type":"ContainerStarted","Data":"c355f5badfdfc3a08fbc22c22c641d4597c08e98a9f7a87e18a552a7dd42b241"} Feb 19 10:11:09 crc kubenswrapper[4811]: I0219 10:11:09.211110 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kfhx5" Feb 19 10:11:09 crc kubenswrapper[4811]: I0219 10:11:09.426118 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:09 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:09 crc 
kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:09 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:09 crc kubenswrapper[4811]: I0219 10:11:09.426328 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:09 crc kubenswrapper[4811]: I0219 10:11:09.538074 4811 patch_prober.go:28] interesting pod/console-f9d7485db-dh45w container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 19 10:11:09 crc kubenswrapper[4811]: I0219 10:11:09.538188 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dh45w" podUID="de01cd94-d3c6-4fa5-9033-9a65a318ab82" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 19 10:11:09 crc kubenswrapper[4811]: I0219 10:11:09.550662 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:11:09 crc kubenswrapper[4811]: I0219 10:11:09.550693 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:11:09 crc kubenswrapper[4811]: I0219 10:11:09.550784 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:11:09 crc kubenswrapper[4811]: I0219 10:11:09.550783 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:11:10 crc kubenswrapper[4811]: I0219 10:11:10.172054 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"af5c10ef-4820-4bac-9788-a547bf06fc20","Type":"ContainerStarted","Data":"497a6fd4311944741b3abfbe1524cc191c26d5a8aa7d3846c2affa5223ec26e2"} Feb 19 10:11:10 crc kubenswrapper[4811]: I0219 10:11:10.422407 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:10 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:10 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:10 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:10 crc kubenswrapper[4811]: I0219 10:11:10.422518 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:10 crc kubenswrapper[4811]: I0219 10:11:10.996304 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:11:11 crc kubenswrapper[4811]: I0219 10:11:11.001573 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-apiserver/apiserver-76f77b778f-b5p7b" Feb 19 10:11:11 crc kubenswrapper[4811]: I0219 10:11:11.023324 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.023298143 podStartE2EDuration="4.023298143s" podCreationTimestamp="2026-02-19 10:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:11:10.20299063 +0000 UTC m=+158.729455326" watchObservedRunningTime="2026-02-19 10:11:11.023298143 +0000 UTC m=+159.549762819" Feb 19 10:11:11 crc kubenswrapper[4811]: I0219 10:11:11.427971 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:11 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:11 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:11 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:11 crc kubenswrapper[4811]: I0219 10:11:11.428039 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:12 crc kubenswrapper[4811]: I0219 10:11:12.264679 4811 generic.go:334] "Generic (PLEG): container finished" podID="af5c10ef-4820-4bac-9788-a547bf06fc20" containerID="497a6fd4311944741b3abfbe1524cc191c26d5a8aa7d3846c2affa5223ec26e2" exitCode=0 Feb 19 10:11:12 crc kubenswrapper[4811]: I0219 10:11:12.265158 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"af5c10ef-4820-4bac-9788-a547bf06fc20","Type":"ContainerDied","Data":"497a6fd4311944741b3abfbe1524cc191c26d5a8aa7d3846c2affa5223ec26e2"} Feb 19 10:11:12 crc kubenswrapper[4811]: I0219 10:11:12.421606 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:12 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:12 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:12 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:12 crc kubenswrapper[4811]: I0219 10:11:12.421681 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:13 crc kubenswrapper[4811]: I0219 10:11:13.433153 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:13 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:13 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:13 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:13 crc kubenswrapper[4811]: I0219 10:11:13.433272 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:14 crc kubenswrapper[4811]: I0219 10:11:14.423172 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:14 crc kubenswrapper[4811]: [-]has-synced failed: reason withheld Feb 19 10:11:14 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:14 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:14 crc kubenswrapper[4811]: I0219 10:11:14.424286 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:15 crc kubenswrapper[4811]: I0219 10:11:15.422380 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:15 crc kubenswrapper[4811]: [+]has-synced ok Feb 19 10:11:15 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:15 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:15 crc kubenswrapper[4811]: I0219 10:11:15.423025 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:16 crc kubenswrapper[4811]: I0219 10:11:16.421794 4811 patch_prober.go:28] interesting pod/router-default-5444994796-cmg4v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 10:11:16 crc kubenswrapper[4811]: [+]has-synced ok Feb 19 10:11:16 crc kubenswrapper[4811]: [+]process-running ok Feb 19 10:11:16 crc kubenswrapper[4811]: healthz check failed Feb 19 10:11:16 crc kubenswrapper[4811]: I0219 
10:11:16.421882 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cmg4v" podUID="30ea8030-3330-42dc-bc17-00a8058c7b04" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 10:11:17 crc kubenswrapper[4811]: I0219 10:11:17.423271 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:11:17 crc kubenswrapper[4811]: I0219 10:11:17.426433 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-cmg4v" Feb 19 10:11:18 crc kubenswrapper[4811]: I0219 10:11:18.886643 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:11:18 crc kubenswrapper[4811]: I0219 10:11:18.921288 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35d4fd8e-224d-468e-b6e2-a292a6e1949f-metrics-certs\") pod \"network-metrics-daemon-n5z6r\" (UID: \"35d4fd8e-224d-468e-b6e2-a292a6e1949f\") " pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.117937 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n5z6r" Feb 19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.551225 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.551241 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.551311 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.551349 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.551423 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-t44xg" Feb 19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.552069 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 
19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.552106 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.552644 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"f49fefb562829c2225cf21672edab892f9f5e5ff8c83ad804b60fa46151418da"} pod="openshift-console/downloads-7954f5f757-t44xg" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.553853 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" containerID="cri-o://f49fefb562829c2225cf21672edab892f9f5e5ff8c83ad804b60fa46151418da" gracePeriod=2 Feb 19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.559059 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:11:19 crc kubenswrapper[4811]: I0219 10:11:19.565306 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:11:20 crc kubenswrapper[4811]: I0219 10:11:20.385752 4811 generic.go:334] "Generic (PLEG): container finished" podID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerID="f49fefb562829c2225cf21672edab892f9f5e5ff8c83ad804b60fa46151418da" exitCode=0 Feb 19 10:11:20 crc kubenswrapper[4811]: I0219 10:11:20.386756 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t44xg" 
event={"ID":"47f24fcb-a536-4173-b099-9fadf4be99c6","Type":"ContainerDied","Data":"f49fefb562829c2225cf21672edab892f9f5e5ff8c83ad804b60fa46151418da"} Feb 19 10:11:24 crc kubenswrapper[4811]: I0219 10:11:24.567124 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:11:25 crc kubenswrapper[4811]: I0219 10:11:25.862418 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 10:11:25 crc kubenswrapper[4811]: I0219 10:11:25.929185 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af5c10ef-4820-4bac-9788-a547bf06fc20-kube-api-access\") pod \"af5c10ef-4820-4bac-9788-a547bf06fc20\" (UID: \"af5c10ef-4820-4bac-9788-a547bf06fc20\") " Feb 19 10:11:25 crc kubenswrapper[4811]: I0219 10:11:25.929314 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af5c10ef-4820-4bac-9788-a547bf06fc20-kubelet-dir\") pod \"af5c10ef-4820-4bac-9788-a547bf06fc20\" (UID: \"af5c10ef-4820-4bac-9788-a547bf06fc20\") " Feb 19 10:11:25 crc kubenswrapper[4811]: I0219 10:11:25.929511 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af5c10ef-4820-4bac-9788-a547bf06fc20-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af5c10ef-4820-4bac-9788-a547bf06fc20" (UID: "af5c10ef-4820-4bac-9788-a547bf06fc20"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:11:25 crc kubenswrapper[4811]: I0219 10:11:25.939698 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5c10ef-4820-4bac-9788-a547bf06fc20-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af5c10ef-4820-4bac-9788-a547bf06fc20" (UID: "af5c10ef-4820-4bac-9788-a547bf06fc20"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:11:26 crc kubenswrapper[4811]: I0219 10:11:26.031666 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af5c10ef-4820-4bac-9788-a547bf06fc20-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:26 crc kubenswrapper[4811]: I0219 10:11:26.031736 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af5c10ef-4820-4bac-9788-a547bf06fc20-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 10:11:26 crc kubenswrapper[4811]: I0219 10:11:26.284878 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n5z6r"] Feb 19 10:11:26 crc kubenswrapper[4811]: I0219 10:11:26.442142 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"af5c10ef-4820-4bac-9788-a547bf06fc20","Type":"ContainerDied","Data":"c355f5badfdfc3a08fbc22c22c641d4597c08e98a9f7a87e18a552a7dd42b241"} Feb 19 10:11:26 crc kubenswrapper[4811]: I0219 10:11:26.442200 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c355f5badfdfc3a08fbc22c22c641d4597c08e98a9f7a87e18a552a7dd42b241" Feb 19 10:11:26 crc kubenswrapper[4811]: I0219 10:11:26.442226 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 10:11:26 crc kubenswrapper[4811]: I0219 10:11:26.788136 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:11:26 crc kubenswrapper[4811]: I0219 10:11:26.788252 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:11:29 crc kubenswrapper[4811]: I0219 10:11:29.550539 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:11:29 crc kubenswrapper[4811]: I0219 10:11:29.551004 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:11:31 crc kubenswrapper[4811]: I0219 10:11:31.124177 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-569hh" Feb 19 10:11:39 crc kubenswrapper[4811]: I0219 10:11:39.551167 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:11:39 crc kubenswrapper[4811]: I0219 10:11:39.552071 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.227414 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 10:11:43 crc kubenswrapper[4811]: E0219 10:11:43.227756 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5c10ef-4820-4bac-9788-a547bf06fc20" containerName="pruner" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.227777 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5c10ef-4820-4bac-9788-a547bf06fc20" containerName="pruner" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.227935 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="af5c10ef-4820-4bac-9788-a547bf06fc20" containerName="pruner" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.228481 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.231256 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.233406 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.234327 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.396168 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02406e89-671a-469f-bddc-f9a138607bb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"02406e89-671a-469f-bddc-f9a138607bb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.396671 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02406e89-671a-469f-bddc-f9a138607bb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"02406e89-671a-469f-bddc-f9a138607bb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.497800 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02406e89-671a-469f-bddc-f9a138607bb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"02406e89-671a-469f-bddc-f9a138607bb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.497938 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/02406e89-671a-469f-bddc-f9a138607bb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"02406e89-671a-469f-bddc-f9a138607bb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.498017 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02406e89-671a-469f-bddc-f9a138607bb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"02406e89-671a-469f-bddc-f9a138607bb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.527520 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02406e89-671a-469f-bddc-f9a138607bb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"02406e89-671a-469f-bddc-f9a138607bb3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 10:11:43 crc kubenswrapper[4811]: I0219 10:11:43.618581 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 10:11:44 crc kubenswrapper[4811]: I0219 10:11:44.021072 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.621516 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.622619 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.643576 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.763872 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-var-lock\") pod \"installer-9-crc\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.763955 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e395259-92d6-4fa4-8eb1-da9246b3626b-kube-api-access\") pod \"installer-9-crc\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.764058 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.865513 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-var-lock\") pod \"installer-9-crc\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.865569 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7e395259-92d6-4fa4-8eb1-da9246b3626b-kube-api-access\") pod \"installer-9-crc\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.865612 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.865690 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.865716 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-var-lock\") pod \"installer-9-crc\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.886982 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e395259-92d6-4fa4-8eb1-da9246b3626b-kube-api-access\") pod \"installer-9-crc\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:11:47 crc kubenswrapper[4811]: I0219 10:11:47.944894 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:11:49 crc kubenswrapper[4811]: I0219 10:11:49.552904 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:11:49 crc kubenswrapper[4811]: I0219 10:11:49.553340 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:11:49 crc kubenswrapper[4811]: I0219 10:11:49.591738 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" event={"ID":"35d4fd8e-224d-468e-b6e2-a292a6e1949f","Type":"ContainerStarted","Data":"c35553bab9c7b746a3a20ad7b0be0b9934546498375a5727c1504080f467cd72"} Feb 19 10:11:50 crc kubenswrapper[4811]: E0219 10:11:50.625804 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 10:11:50 crc kubenswrapper[4811]: E0219 10:11:50.626075 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krrpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zq68z_openshift-marketplace(f528e872-8fd6-4cfc-b4d5-129433fed8f6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 10:11:50 crc kubenswrapper[4811]: E0219 10:11:50.627275 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zq68z" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" Feb 19 10:11:51 crc 
kubenswrapper[4811]: E0219 10:11:51.167596 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 10:11:51 crc kubenswrapper[4811]: E0219 10:11:51.167809 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmb2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-5kfzw_openshift-marketplace(fc1f2b4f-13cd-486c-a09a-14c17b7e2c86): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 10:11:51 crc kubenswrapper[4811]: E0219 10:11:51.169239 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5kfzw" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" Feb 19 10:11:55 crc kubenswrapper[4811]: E0219 10:11:55.994749 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zq68z" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" Feb 19 10:11:55 crc kubenswrapper[4811]: E0219 10:11:55.994887 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5kfzw" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" Feb 19 10:11:56 crc kubenswrapper[4811]: E0219 10:11:56.093910 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 10:11:56 crc kubenswrapper[4811]: E0219 10:11:56.094661 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9slxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pctn2_openshift-marketplace(006d82c1-543d-45da-8e4b-75a557df7959): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 10:11:56 crc kubenswrapper[4811]: E0219 10:11:56.096091 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-pctn2" podUID="006d82c1-543d-45da-8e4b-75a557df7959" Feb 19 10:11:56 crc kubenswrapper[4811]: E0219 10:11:56.123955 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 10:11:56 crc kubenswrapper[4811]: E0219 10:11:56.124412 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm4rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ksjbn_openshift-marketplace(ca27a1be-f8a5-447f-b61f-da206a041290): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 10:11:56 crc kubenswrapper[4811]: E0219 10:11:56.125704 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ksjbn" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" Feb 19 10:11:56 crc kubenswrapper[4811]: I0219 10:11:56.788524 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:11:56 crc kubenswrapper[4811]: I0219 10:11:56.789152 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:11:56 crc kubenswrapper[4811]: I0219 10:11:56.789223 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:11:56 crc kubenswrapper[4811]: I0219 10:11:56.791185 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:11:56 crc kubenswrapper[4811]: I0219 10:11:56.791269 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397" gracePeriod=600 Feb 19 10:11:57 crc kubenswrapper[4811]: I0219 10:11:57.646291 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397" exitCode=0 Feb 19 10:11:57 crc kubenswrapper[4811]: I0219 10:11:57.646363 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397"} Feb 19 10:11:58 crc kubenswrapper[4811]: E0219 10:11:58.019610 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ksjbn" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" Feb 19 10:11:58 crc kubenswrapper[4811]: E0219 10:11:58.020444 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pctn2" 
podUID="006d82c1-543d-45da-8e4b-75a557df7959" Feb 19 10:11:58 crc kubenswrapper[4811]: E0219 10:11:58.483359 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 10:11:58 crc kubenswrapper[4811]: E0219 10:11:58.483609 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wf6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPol
icy:nil,} start failed in pod community-operators-cpcsw_openshift-marketplace(c2f288dc-ac27-4a76-84da-d3f4e2dccf77): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 10:11:58 crc kubenswrapper[4811]: E0219 10:11:58.484803 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cpcsw" podUID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" Feb 19 10:11:59 crc kubenswrapper[4811]: I0219 10:11:59.551398 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:11:59 crc kubenswrapper[4811]: I0219 10:11:59.551526 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:11:59 crc kubenswrapper[4811]: E0219 10:11:59.911611 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cpcsw" podUID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" Feb 19 10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.212162 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.213055 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79jzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-79ql7_openshift-marketplace(bad2100b-8b05-432c-96b8-19dbc3c5471c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 
10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.214276 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-79ql7" podUID="bad2100b-8b05-432c-96b8-19dbc3c5471c" Feb 19 10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.248595 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.248809 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vmvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-q7qnh_openshift-marketplace(0f0825a4-4e84-49d7-929e-165ced3ed2f8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.250347 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-q7qnh" podUID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" Feb 19 10:12:00 crc 
kubenswrapper[4811]: I0219 10:12:00.430063 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.451124 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.451303 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h87cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnErr
or,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k4gqt_openshift-marketplace(c4c2ec9c-52c4-442e-adf8-eddb1d74d250): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.452435 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-k4gqt" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" Feb 19 10:12:00 crc kubenswrapper[4811]: I0219 10:12:00.487527 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 10:12:00 crc kubenswrapper[4811]: W0219 10:12:00.499671 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod02406e89_671a_469f_bddc_f9a138607bb3.slice/crio-456120908a6bbbb184cd46c6f356dff9b1a756c051d4c4d8e6d38812e4e09b79 WatchSource:0}: Error finding container 456120908a6bbbb184cd46c6f356dff9b1a756c051d4c4d8e6d38812e4e09b79: Status 404 returned error can't find the container with id 456120908a6bbbb184cd46c6f356dff9b1a756c051d4c4d8e6d38812e4e09b79 Feb 19 10:12:00 crc kubenswrapper[4811]: I0219 10:12:00.669200 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"02406e89-671a-469f-bddc-f9a138607bb3","Type":"ContainerStarted","Data":"456120908a6bbbb184cd46c6f356dff9b1a756c051d4c4d8e6d38812e4e09b79"} Feb 19 10:12:00 crc kubenswrapper[4811]: I0219 10:12:00.675521 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" 
event={"ID":"35d4fd8e-224d-468e-b6e2-a292a6e1949f","Type":"ContainerStarted","Data":"cae2f3f086d5fb9bbb448d5706652e679d47cc83e1ebc0b445b97db7bf34cd3d"} Feb 19 10:12:00 crc kubenswrapper[4811]: I0219 10:12:00.686220 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t44xg" event={"ID":"47f24fcb-a536-4173-b099-9fadf4be99c6","Type":"ContainerStarted","Data":"5e38d53c8afcc6ffa260477cfe49c151dd5a8eef5a303f664d7c95502ba8ca96"} Feb 19 10:12:00 crc kubenswrapper[4811]: I0219 10:12:00.687896 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t44xg" Feb 19 10:12:00 crc kubenswrapper[4811]: I0219 10:12:00.689512 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:12:00 crc kubenswrapper[4811]: I0219 10:12:00.689583 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:12:00 crc kubenswrapper[4811]: I0219 10:12:00.691585 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7e395259-92d6-4fa4-8eb1-da9246b3626b","Type":"ContainerStarted","Data":"da0dd4bd2ecee080f9cf9ec62ac443fe2a970c63fa96b4576644122d8f4fe6f2"} Feb 19 10:12:00 crc kubenswrapper[4811]: I0219 10:12:00.698219 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" 
event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"f5ee7e0c8452ed8d16d6bb0b1911f3bd3e4bc5692e9f2491a7bdfabe3ba59054"} Feb 19 10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.700023 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-k4gqt" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" Feb 19 10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.700507 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-79ql7" podUID="bad2100b-8b05-432c-96b8-19dbc3c5471c" Feb 19 10:12:00 crc kubenswrapper[4811]: E0219 10:12:00.700889 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-q7qnh" podUID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" Feb 19 10:12:01 crc kubenswrapper[4811]: I0219 10:12:01.707510 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"02406e89-671a-469f-bddc-f9a138607bb3","Type":"ContainerStarted","Data":"37fec1bc979f4a36133ed20f54d7d3c7d9aa3c0640332835c974b7d0c19813b8"} Feb 19 10:12:01 crc kubenswrapper[4811]: I0219 10:12:01.712815 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n5z6r" event={"ID":"35d4fd8e-224d-468e-b6e2-a292a6e1949f","Type":"ContainerStarted","Data":"4f8749c7c92329a0d825a44e05b62dd99d0004a6cf11f6a414815cbe59b54a7f"} Feb 19 10:12:01 crc 
kubenswrapper[4811]: I0219 10:12:01.717403 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7e395259-92d6-4fa4-8eb1-da9246b3626b","Type":"ContainerStarted","Data":"75550b433f5513e6b08d573fe5e471ddbbc9ff9e6bceb340111d036c831bc4bf"} Feb 19 10:12:01 crc kubenswrapper[4811]: I0219 10:12:01.717885 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:12:01 crc kubenswrapper[4811]: I0219 10:12:01.717929 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:12:01 crc kubenswrapper[4811]: I0219 10:12:01.741495 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=18.741420446 podStartE2EDuration="18.741420446s" podCreationTimestamp="2026-02-19 10:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:12:01.739363807 +0000 UTC m=+210.265828513" watchObservedRunningTime="2026-02-19 10:12:01.741420446 +0000 UTC m=+210.267885132" Feb 19 10:12:01 crc kubenswrapper[4811]: I0219 10:12:01.759409 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-n5z6r" podStartSLOduration=186.759386102 podStartE2EDuration="3m6.759386102s" podCreationTimestamp="2026-02-19 10:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 10:12:01.758145083 +0000 UTC m=+210.284609779" watchObservedRunningTime="2026-02-19 10:12:01.759386102 +0000 UTC m=+210.285850788" Feb 19 10:12:01 crc kubenswrapper[4811]: I0219 10:12:01.780082 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=14.780054243 podStartE2EDuration="14.780054243s" podCreationTimestamp="2026-02-19 10:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:12:01.777291677 +0000 UTC m=+210.303756373" watchObservedRunningTime="2026-02-19 10:12:01.780054243 +0000 UTC m=+210.306518929" Feb 19 10:12:02 crc kubenswrapper[4811]: I0219 10:12:02.724568 4811 generic.go:334] "Generic (PLEG): container finished" podID="02406e89-671a-469f-bddc-f9a138607bb3" containerID="37fec1bc979f4a36133ed20f54d7d3c7d9aa3c0640332835c974b7d0c19813b8" exitCode=0 Feb 19 10:12:02 crc kubenswrapper[4811]: I0219 10:12:02.724720 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"02406e89-671a-469f-bddc-f9a138607bb3","Type":"ContainerDied","Data":"37fec1bc979f4a36133ed20f54d7d3c7d9aa3c0640332835c974b7d0c19813b8"} Feb 19 10:12:02 crc kubenswrapper[4811]: I0219 10:12:02.727644 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:12:02 crc kubenswrapper[4811]: I0219 10:12:02.727722 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: 
connection refused" Feb 19 10:12:04 crc kubenswrapper[4811]: I0219 10:12:04.010272 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 10:12:04 crc kubenswrapper[4811]: I0219 10:12:04.119305 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02406e89-671a-469f-bddc-f9a138607bb3-kube-api-access\") pod \"02406e89-671a-469f-bddc-f9a138607bb3\" (UID: \"02406e89-671a-469f-bddc-f9a138607bb3\") " Feb 19 10:12:04 crc kubenswrapper[4811]: I0219 10:12:04.120173 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02406e89-671a-469f-bddc-f9a138607bb3-kubelet-dir\") pod \"02406e89-671a-469f-bddc-f9a138607bb3\" (UID: \"02406e89-671a-469f-bddc-f9a138607bb3\") " Feb 19 10:12:04 crc kubenswrapper[4811]: I0219 10:12:04.120299 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02406e89-671a-469f-bddc-f9a138607bb3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "02406e89-671a-469f-bddc-f9a138607bb3" (UID: "02406e89-671a-469f-bddc-f9a138607bb3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:12:04 crc kubenswrapper[4811]: I0219 10:12:04.120734 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02406e89-671a-469f-bddc-f9a138607bb3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:04 crc kubenswrapper[4811]: I0219 10:12:04.126831 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02406e89-671a-469f-bddc-f9a138607bb3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "02406e89-671a-469f-bddc-f9a138607bb3" (UID: "02406e89-671a-469f-bddc-f9a138607bb3"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:12:04 crc kubenswrapper[4811]: I0219 10:12:04.223273 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02406e89-671a-469f-bddc-f9a138607bb3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:04 crc kubenswrapper[4811]: I0219 10:12:04.741784 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"02406e89-671a-469f-bddc-f9a138607bb3","Type":"ContainerDied","Data":"456120908a6bbbb184cd46c6f356dff9b1a756c051d4c4d8e6d38812e4e09b79"} Feb 19 10:12:04 crc kubenswrapper[4811]: I0219 10:12:04.741842 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="456120908a6bbbb184cd46c6f356dff9b1a756c051d4c4d8e6d38812e4e09b79" Feb 19 10:12:04 crc kubenswrapper[4811]: I0219 10:12:04.741883 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 10:12:09 crc kubenswrapper[4811]: I0219 10:12:09.187725 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8wjls"] Feb 19 10:12:09 crc kubenswrapper[4811]: I0219 10:12:09.550434 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:12:09 crc kubenswrapper[4811]: I0219 10:12:09.550504 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:12:09 crc kubenswrapper[4811]: I0219 10:12:09.550434 4811 patch_prober.go:28] interesting pod/downloads-7954f5f757-t44xg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 19 10:12:09 crc kubenswrapper[4811]: I0219 10:12:09.551791 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t44xg" podUID="47f24fcb-a536-4173-b099-9fadf4be99c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 19 10:12:10 crc kubenswrapper[4811]: I0219 10:12:10.776266 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfzw" event={"ID":"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86","Type":"ContainerStarted","Data":"46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54"} Feb 19 10:12:11 crc 
kubenswrapper[4811]: I0219 10:12:11.782123 4811 generic.go:334] "Generic (PLEG): container finished" podID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerID="46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54" exitCode=0 Feb 19 10:12:11 crc kubenswrapper[4811]: I0219 10:12:11.782519 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfzw" event={"ID":"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86","Type":"ContainerDied","Data":"46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54"} Feb 19 10:12:19 crc kubenswrapper[4811]: I0219 10:12:19.565717 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-t44xg" Feb 19 10:12:19 crc kubenswrapper[4811]: I0219 10:12:19.850256 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7qnh" event={"ID":"0f0825a4-4e84-49d7-929e-165ced3ed2f8","Type":"ContainerStarted","Data":"9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e"} Feb 19 10:12:19 crc kubenswrapper[4811]: I0219 10:12:19.863070 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpcsw" event={"ID":"c2f288dc-ac27-4a76-84da-d3f4e2dccf77","Type":"ContainerStarted","Data":"65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb"} Feb 19 10:12:19 crc kubenswrapper[4811]: I0219 10:12:19.884393 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4gqt" event={"ID":"c4c2ec9c-52c4-442e-adf8-eddb1d74d250","Type":"ContainerStarted","Data":"9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0"} Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.894219 4811 generic.go:334] "Generic (PLEG): container finished" podID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" containerID="65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb" exitCode=0 Feb 19 10:12:20 crc 
kubenswrapper[4811]: I0219 10:12:20.894333 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpcsw" event={"ID":"c2f288dc-ac27-4a76-84da-d3f4e2dccf77","Type":"ContainerDied","Data":"65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb"} Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.905770 4811 generic.go:334] "Generic (PLEG): container finished" podID="bad2100b-8b05-432c-96b8-19dbc3c5471c" containerID="f32c4d2722e2fb8420ad930793cc6d8c35d46b4ebc02c1f596085ef118349e58" exitCode=0 Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.905875 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79ql7" event={"ID":"bad2100b-8b05-432c-96b8-19dbc3c5471c","Type":"ContainerDied","Data":"f32c4d2722e2fb8420ad930793cc6d8c35d46b4ebc02c1f596085ef118349e58"} Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.907967 4811 generic.go:334] "Generic (PLEG): container finished" podID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" containerID="9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0" exitCode=0 Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.908079 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4gqt" event={"ID":"c4c2ec9c-52c4-442e-adf8-eddb1d74d250","Type":"ContainerDied","Data":"9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0"} Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.912796 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfzw" event={"ID":"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86","Type":"ContainerStarted","Data":"09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3"} Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.924626 4811 generic.go:334] "Generic (PLEG): container finished" podID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" 
containerID="9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e" exitCode=0 Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.924891 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7qnh" event={"ID":"0f0825a4-4e84-49d7-929e-165ced3ed2f8","Type":"ContainerDied","Data":"9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e"} Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.933670 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zq68z" event={"ID":"f528e872-8fd6-4cfc-b4d5-129433fed8f6","Type":"ContainerStarted","Data":"c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757"} Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.943144 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pctn2" event={"ID":"006d82c1-543d-45da-8e4b-75a557df7959","Type":"ContainerDied","Data":"c667cf10ca24075d80e90f6b7f7a8a98915541a064ec0591822211c7de6565f6"} Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.943049 4811 generic.go:334] "Generic (PLEG): container finished" podID="006d82c1-543d-45da-8e4b-75a557df7959" containerID="c667cf10ca24075d80e90f6b7f7a8a98915541a064ec0591822211c7de6565f6" exitCode=0 Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.951119 4811 generic.go:334] "Generic (PLEG): container finished" podID="ca27a1be-f8a5-447f-b61f-da206a041290" containerID="82b0866ab8a07990353fa81c9d1ee85674fe213ac9e67dece09a22de4039857b" exitCode=0 Feb 19 10:12:20 crc kubenswrapper[4811]: I0219 10:12:20.951188 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksjbn" event={"ID":"ca27a1be-f8a5-447f-b61f-da206a041290","Type":"ContainerDied","Data":"82b0866ab8a07990353fa81c9d1ee85674fe213ac9e67dece09a22de4039857b"} Feb 19 10:12:21 crc kubenswrapper[4811]: I0219 10:12:21.959819 4811 generic.go:334] "Generic (PLEG): container 
finished" podID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" containerID="c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757" exitCode=0 Feb 19 10:12:21 crc kubenswrapper[4811]: I0219 10:12:21.960803 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zq68z" event={"ID":"f528e872-8fd6-4cfc-b4d5-129433fed8f6","Type":"ContainerDied","Data":"c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757"} Feb 19 10:12:21 crc kubenswrapper[4811]: I0219 10:12:21.982958 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5kfzw" podStartSLOduration=6.013500363 podStartE2EDuration="1m23.982935939s" podCreationTimestamp="2026-02-19 10:10:58 +0000 UTC" firstStartedPulling="2026-02-19 10:11:00.480888696 +0000 UTC m=+149.007353382" lastFinishedPulling="2026-02-19 10:12:18.450324272 +0000 UTC m=+226.976788958" observedRunningTime="2026-02-19 10:12:21.981788792 +0000 UTC m=+230.508253478" watchObservedRunningTime="2026-02-19 10:12:21.982935939 +0000 UTC m=+230.509400625" Feb 19 10:12:23 crc kubenswrapper[4811]: I0219 10:12:23.977894 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpcsw" event={"ID":"c2f288dc-ac27-4a76-84da-d3f4e2dccf77","Type":"ContainerStarted","Data":"df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d"} Feb 19 10:12:25 crc kubenswrapper[4811]: I0219 10:12:25.001628 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cpcsw" podStartSLOduration=7.102305587 podStartE2EDuration="1m30.001610783s" podCreationTimestamp="2026-02-19 10:10:55 +0000 UTC" firstStartedPulling="2026-02-19 10:11:00.445315222 +0000 UTC m=+148.971779908" lastFinishedPulling="2026-02-19 10:12:23.344620418 +0000 UTC m=+231.871085104" observedRunningTime="2026-02-19 10:12:24.999034222 +0000 UTC m=+233.525498908" 
watchObservedRunningTime="2026-02-19 10:12:25.001610783 +0000 UTC m=+233.528075469" Feb 19 10:12:25 crc kubenswrapper[4811]: I0219 10:12:25.693317 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:12:25 crc kubenswrapper[4811]: I0219 10:12:25.693750 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:12:25 crc kubenswrapper[4811]: I0219 10:12:25.991994 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksjbn" event={"ID":"ca27a1be-f8a5-447f-b61f-da206a041290","Type":"ContainerStarted","Data":"852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e"} Feb 19 10:12:26 crc kubenswrapper[4811]: I0219 10:12:26.011378 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ksjbn" podStartSLOduration=6.168109231 podStartE2EDuration="1m31.01135272s" podCreationTimestamp="2026-02-19 10:10:55 +0000 UTC" firstStartedPulling="2026-02-19 10:11:00.361260438 +0000 UTC m=+148.887725124" lastFinishedPulling="2026-02-19 10:12:25.204503927 +0000 UTC m=+233.730968613" observedRunningTime="2026-02-19 10:12:26.008077192 +0000 UTC m=+234.534541878" watchObservedRunningTime="2026-02-19 10:12:26.01135272 +0000 UTC m=+234.537817426" Feb 19 10:12:26 crc kubenswrapper[4811]: I0219 10:12:26.576804 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:12:27 crc kubenswrapper[4811]: I0219 10:12:27.000983 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pctn2" event={"ID":"006d82c1-543d-45da-8e4b-75a557df7959","Type":"ContainerStarted","Data":"8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded"} Feb 19 10:12:27 crc kubenswrapper[4811]: I0219 10:12:27.032133 
4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pctn2" podStartSLOduration=4.749685388 podStartE2EDuration="1m32.032101079s" podCreationTimestamp="2026-02-19 10:10:55 +0000 UTC" firstStartedPulling="2026-02-19 10:10:58.755980111 +0000 UTC m=+147.282444797" lastFinishedPulling="2026-02-19 10:12:26.038395812 +0000 UTC m=+234.564860488" observedRunningTime="2026-02-19 10:12:27.022937562 +0000 UTC m=+235.549402248" watchObservedRunningTime="2026-02-19 10:12:27.032101079 +0000 UTC m=+235.558565765" Feb 19 10:12:28 crc kubenswrapper[4811]: I0219 10:12:28.008295 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79ql7" event={"ID":"bad2100b-8b05-432c-96b8-19dbc3c5471c","Type":"ContainerStarted","Data":"a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374"} Feb 19 10:12:28 crc kubenswrapper[4811]: I0219 10:12:28.590291 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:12:28 crc kubenswrapper[4811]: I0219 10:12:28.590872 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:12:29 crc kubenswrapper[4811]: I0219 10:12:29.032948 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-79ql7" podStartSLOduration=5.348062908 podStartE2EDuration="1m32.032928072s" podCreationTimestamp="2026-02-19 10:10:57 +0000 UTC" firstStartedPulling="2026-02-19 10:11:00.337401512 +0000 UTC m=+148.863866198" lastFinishedPulling="2026-02-19 10:12:27.022266666 +0000 UTC m=+235.548731362" observedRunningTime="2026-02-19 10:12:29.030153726 +0000 UTC m=+237.556618412" watchObservedRunningTime="2026-02-19 10:12:29.032928072 +0000 UTC m=+237.559392758" Feb 19 10:12:29 crc kubenswrapper[4811]: I0219 10:12:29.671334 4811 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-5kfzw" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerName="registry-server" probeResult="failure" output=< Feb 19 10:12:29 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 19 10:12:29 crc kubenswrapper[4811]: > Feb 19 10:12:32 crc kubenswrapper[4811]: I0219 10:12:32.030780 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4gqt" event={"ID":"c4c2ec9c-52c4-442e-adf8-eddb1d74d250","Type":"ContainerStarted","Data":"aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891"} Feb 19 10:12:32 crc kubenswrapper[4811]: I0219 10:12:32.033280 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7qnh" event={"ID":"0f0825a4-4e84-49d7-929e-165ced3ed2f8","Type":"ContainerStarted","Data":"1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f"} Feb 19 10:12:32 crc kubenswrapper[4811]: I0219 10:12:32.035579 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zq68z" event={"ID":"f528e872-8fd6-4cfc-b4d5-129433fed8f6","Type":"ContainerStarted","Data":"1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784"} Feb 19 10:12:32 crc kubenswrapper[4811]: I0219 10:12:32.053310 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k4gqt" podStartSLOduration=5.817546415 podStartE2EDuration="1m37.053281625s" podCreationTimestamp="2026-02-19 10:10:55 +0000 UTC" firstStartedPulling="2026-02-19 10:10:59.116602667 +0000 UTC m=+147.643067353" lastFinishedPulling="2026-02-19 10:12:30.352337877 +0000 UTC m=+238.878802563" observedRunningTime="2026-02-19 10:12:32.050321845 +0000 UTC m=+240.576786541" watchObservedRunningTime="2026-02-19 10:12:32.053281625 +0000 UTC m=+240.579746311" Feb 19 10:12:32 crc kubenswrapper[4811]: I0219 10:12:32.073643 4811 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q7qnh" podStartSLOduration=4.762309281 podStartE2EDuration="1m35.073622968s" podCreationTimestamp="2026-02-19 10:10:57 +0000 UTC" firstStartedPulling="2026-02-19 10:11:00.567834819 +0000 UTC m=+149.094299505" lastFinishedPulling="2026-02-19 10:12:30.879148486 +0000 UTC m=+239.405613192" observedRunningTime="2026-02-19 10:12:32.072811929 +0000 UTC m=+240.599276615" watchObservedRunningTime="2026-02-19 10:12:32.073622968 +0000 UTC m=+240.600087644" Feb 19 10:12:32 crc kubenswrapper[4811]: I0219 10:12:32.098014 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zq68z" podStartSLOduration=3.370604877 podStartE2EDuration="1m34.097997026s" podCreationTimestamp="2026-02-19 10:10:58 +0000 UTC" firstStartedPulling="2026-02-19 10:11:00.319396465 +0000 UTC m=+148.845861151" lastFinishedPulling="2026-02-19 10:12:31.046788614 +0000 UTC m=+239.573253300" observedRunningTime="2026-02-19 10:12:32.09691791 +0000 UTC m=+240.623382596" watchObservedRunningTime="2026-02-19 10:12:32.097997026 +0000 UTC m=+240.624461712" Feb 19 10:12:34 crc kubenswrapper[4811]: I0219 10:12:34.212017 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" podUID="2f61e647-ba04-4c85-9e4e-b58c3823b3a2" containerName="oauth-openshift" containerID="cri-o://b985de480614b363736d5a52817dc44700a02fc5707514fa154cc35743d3d0a3" gracePeriod=15 Feb 19 10:12:35 crc kubenswrapper[4811]: I0219 10:12:35.597189 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:12:35 crc kubenswrapper[4811]: I0219 10:12:35.597962 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:12:35 crc kubenswrapper[4811]: 
I0219 10:12:35.646425 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:12:35 crc kubenswrapper[4811]: I0219 10:12:35.708234 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:12:35 crc kubenswrapper[4811]: I0219 10:12:35.708282 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:12:35 crc kubenswrapper[4811]: I0219 10:12:35.744124 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:12:35 crc kubenswrapper[4811]: I0219 10:12:35.763343 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:12:35 crc kubenswrapper[4811]: I0219 10:12:35.832270 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:12:35 crc kubenswrapper[4811]: I0219 10:12:35.832514 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:12:35 crc kubenswrapper[4811]: I0219 10:12:35.869591 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.055218 4811 generic.go:334] "Generic (PLEG): container finished" podID="2f61e647-ba04-4c85-9e4e-b58c3823b3a2" containerID="b985de480614b363736d5a52817dc44700a02fc5707514fa154cc35743d3d0a3" exitCode=0 Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.055256 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" 
event={"ID":"2f61e647-ba04-4c85-9e4e-b58c3823b3a2","Type":"ContainerDied","Data":"b985de480614b363736d5a52817dc44700a02fc5707514fa154cc35743d3d0a3"} Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.090237 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.090723 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.101370 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.406722 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.443948 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n"] Feb 19 10:12:36 crc kubenswrapper[4811]: E0219 10:12:36.444237 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f61e647-ba04-4c85-9e4e-b58c3823b3a2" containerName="oauth-openshift" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.444254 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f61e647-ba04-4c85-9e4e-b58c3823b3a2" containerName="oauth-openshift" Feb 19 10:12:36 crc kubenswrapper[4811]: E0219 10:12:36.444271 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02406e89-671a-469f-bddc-f9a138607bb3" containerName="pruner" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.444279 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="02406e89-671a-469f-bddc-f9a138607bb3" containerName="pruner" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.444404 4811 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="02406e89-671a-469f-bddc-f9a138607bb3" containerName="pruner" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.444417 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f61e647-ba04-4c85-9e4e-b58c3823b3a2" containerName="oauth-openshift" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.444913 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.458298 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n"] Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.569728 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-service-ca\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.569820 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcks8\" (UniqueName: \"kubernetes.io/projected/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-kube-api-access-xcks8\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.569852 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-trusted-ca-bundle\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.569896 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-idp-0-file-data\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570009 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-dir\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570032 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-ocp-branding-template\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570122 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-session\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570149 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-policies\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570177 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-cliconfig\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570202 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-error\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570225 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-router-certs\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570256 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-provider-selection\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570287 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-login\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570351 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-serving-cert\") pod \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\" (UID: \"2f61e647-ba04-4c85-9e4e-b58c3823b3a2\") " Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570622 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570661 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1fb8f251-0f82-4f32-a893-0c56c5c08733-audit-dir\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570693 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570720 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " 
pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570748 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-audit-policies\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570780 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570819 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570857 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570889 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-template-error\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570927 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570953 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-session\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570976 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-template-login\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.571003 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlkpt\" (UniqueName: 
\"kubernetes.io/projected/1fb8f251-0f82-4f32-a893-0c56c5c08733-kube-api-access-mlkpt\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.571026 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.570707 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.571771 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.572112 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.572778 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.573079 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.576509 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.577731 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-kube-api-access-xcks8" (OuterVolumeSpecName: "kube-api-access-xcks8") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "kube-api-access-xcks8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.578006 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.580713 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.581105 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.581625 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.581924 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.582211 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.585360 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2f61e647-ba04-4c85-9e4e-b58c3823b3a2" (UID: "2f61e647-ba04-4c85-9e4e-b58c3823b3a2"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672109 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672188 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1fb8f251-0f82-4f32-a893-0c56c5c08733-audit-dir\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672218 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672246 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672268 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-audit-policies\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672293 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672330 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672359 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672383 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-template-error\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " 
pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672413 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672442 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-session\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672499 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-template-login\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672533 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlkpt\" (UniqueName: \"kubernetes.io/projected/1fb8f251-0f82-4f32-a893-0c56c5c08733-kube-api-access-mlkpt\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672561 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672611 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672624 4811 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672636 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672651 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672663 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672679 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672693 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672705 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672717 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672729 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcks8\" (UniqueName: \"kubernetes.io/projected/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-kube-api-access-xcks8\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672741 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672755 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672767 4811 
reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.672779 4811 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f61e647-ba04-4c85-9e4e-b58c3823b3a2-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.673747 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1fb8f251-0f82-4f32-a893-0c56c5c08733-audit-dir\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.673798 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.674583 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-service-ca\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.675187 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-audit-policies\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.675700 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.676831 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-template-error\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.678016 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.678425 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " 
pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.679575 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-session\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.679971 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.681259 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.681512 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-system-router-certs\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.681957 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/1fb8f251-0f82-4f32-a893-0c56c5c08733-v4-0-config-user-template-login\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.695252 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlkpt\" (UniqueName: \"kubernetes.io/projected/1fb8f251-0f82-4f32-a893-0c56c5c08733-kube-api-access-mlkpt\") pod \"oauth-openshift-5bf578b4ff-q4v2n\" (UID: \"1fb8f251-0f82-4f32-a893-0c56c5c08733\") " pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:36 crc kubenswrapper[4811]: I0219 10:12:36.758818 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.065730 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" event={"ID":"2f61e647-ba04-4c85-9e4e-b58c3823b3a2","Type":"ContainerDied","Data":"fc0de6666549961ff86be650428caa06eefe9d09e2a99faf578061e4fa6df9ec"} Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.066212 4811 scope.go:117] "RemoveContainer" containerID="b985de480614b363736d5a52817dc44700a02fc5707514fa154cc35743d3d0a3" Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.066007 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8wjls" Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.086341 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8wjls"] Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.090652 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8wjls"] Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.159703 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n"] Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.412121 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.412430 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.448140 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.878740 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.878885 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.919718 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:12:37 crc kubenswrapper[4811]: I0219 10:12:37.926482 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ksjbn"] Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 
10:12:38.074386 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" event={"ID":"1fb8f251-0f82-4f32-a893-0c56c5c08733","Type":"ContainerStarted","Data":"d481d606e3e9149c77fc605be953b8357412fe17490bea2d99a2673dfe782df2"} Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.074483 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" event={"ID":"1fb8f251-0f82-4f32-a893-0c56c5c08733","Type":"ContainerStarted","Data":"d0520c943a60d8d83659c8695c476a34315885e19729eab841e2cc1fdf84c1bd"} Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.075632 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ksjbn" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" containerName="registry-server" containerID="cri-o://852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e" gracePeriod=2 Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.099687 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" podStartSLOduration=29.099666695 podStartE2EDuration="29.099666695s" podCreationTimestamp="2026-02-19 10:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:12:38.097875713 +0000 UTC m=+246.624340399" watchObservedRunningTime="2026-02-19 10:12:38.099666695 +0000 UTC m=+246.626131381" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.126487 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.130229 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:12:38 crc 
kubenswrapper[4811]: I0219 10:12:38.454576 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.529482 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k4gqt"] Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.591475 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f61e647-ba04-4c85-9e4e-b58c3823b3a2" path="/var/lib/kubelet/pods/2f61e647-ba04-4c85-9e4e-b58c3823b3a2/volumes" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.597913 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-utilities\") pod \"ca27a1be-f8a5-447f-b61f-da206a041290\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.597979 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-catalog-content\") pod \"ca27a1be-f8a5-447f-b61f-da206a041290\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.598044 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm4rp\" (UniqueName: \"kubernetes.io/projected/ca27a1be-f8a5-447f-b61f-da206a041290-kube-api-access-qm4rp\") pod \"ca27a1be-f8a5-447f-b61f-da206a041290\" (UID: \"ca27a1be-f8a5-447f-b61f-da206a041290\") " Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.600057 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-utilities" (OuterVolumeSpecName: "utilities") pod "ca27a1be-f8a5-447f-b61f-da206a041290" (UID: 
"ca27a1be-f8a5-447f-b61f-da206a041290"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.607877 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca27a1be-f8a5-447f-b61f-da206a041290-kube-api-access-qm4rp" (OuterVolumeSpecName: "kube-api-access-qm4rp") pod "ca27a1be-f8a5-447f-b61f-da206a041290" (UID: "ca27a1be-f8a5-447f-b61f-da206a041290"). InnerVolumeSpecName "kube-api-access-qm4rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.627058 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.664726 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca27a1be-f8a5-447f-b61f-da206a041290" (UID: "ca27a1be-f8a5-447f-b61f-da206a041290"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.699981 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.700027 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca27a1be-f8a5-447f-b61f-da206a041290-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.700042 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm4rp\" (UniqueName: \"kubernetes.io/projected/ca27a1be-f8a5-447f-b61f-da206a041290-kube-api-access-qm4rp\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.704802 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.761785 4811 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.762477 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53" gracePeriod=15 Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.762561 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6" gracePeriod=15 Feb 19 10:12:38 crc 
kubenswrapper[4811]: I0219 10:12:38.762512 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34" gracePeriod=15 Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.762537 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692" gracePeriod=15 Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.762789 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0" gracePeriod=15 Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.762931 4811 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 10:12:38 crc kubenswrapper[4811]: E0219 10:12:38.763195 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763210 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 10:12:38 crc kubenswrapper[4811]: E0219 10:12:38.763218 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763224 4811 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 10:12:38 crc kubenswrapper[4811]: E0219 10:12:38.763234 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763240 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 10:12:38 crc kubenswrapper[4811]: E0219 10:12:38.763248 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763253 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 10:12:38 crc kubenswrapper[4811]: E0219 10:12:38.763260 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763266 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 10:12:38 crc kubenswrapper[4811]: E0219 10:12:38.763274 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763279 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 10:12:38 crc kubenswrapper[4811]: E0219 10:12:38.763291 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 10:12:38 crc 
kubenswrapper[4811]: I0219 10:12:38.763296 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 10:12:38 crc kubenswrapper[4811]: E0219 10:12:38.763304 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763309 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 10:12:38 crc kubenswrapper[4811]: E0219 10:12:38.763317 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" containerName="registry-server" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763322 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" containerName="registry-server" Feb 19 10:12:38 crc kubenswrapper[4811]: E0219 10:12:38.763329 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" containerName="extract-content" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763335 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" containerName="extract-content" Feb 19 10:12:38 crc kubenswrapper[4811]: E0219 10:12:38.763345 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" containerName="extract-utilities" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763351 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" containerName="extract-utilities" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763511 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" containerName="registry-server" 
Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763524 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763534 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763541 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763551 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763560 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763570 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.763579 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.768528 4811 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.769641 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.772666 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.902294 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.902405 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.902430 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.902536 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.902566 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.902598 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.902622 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.902643 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.990197 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:12:38 crc kubenswrapper[4811]: I0219 10:12:38.990399 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004128 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004183 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004220 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004242 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004271 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004306 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004343 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004424 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004479 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:39 crc 
kubenswrapper[4811]: I0219 10:12:39.004502 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004521 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004540 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004560 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.004580 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.005399 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.005432 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.006131 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.006965 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.007370 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.007875 4811 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.007916 4811 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.008243 4811 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.036498 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.037190 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.081521 4811 generic.go:334] "Generic (PLEG): container finished" podID="ca27a1be-f8a5-447f-b61f-da206a041290" containerID="852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e" exitCode=0 Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.081578 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksjbn" event={"ID":"ca27a1be-f8a5-447f-b61f-da206a041290","Type":"ContainerDied","Data":"852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e"} Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.081589 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ksjbn" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.081613 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksjbn" event={"ID":"ca27a1be-f8a5-447f-b61f-da206a041290","Type":"ContainerDied","Data":"6283b03238859915772456f8ecf843e140409957420ab999199a77ce3cb63a9b"} Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.081632 4811 scope.go:117] "RemoveContainer" containerID="852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.082711 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.083401 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.083820 4811 generic.go:334] "Generic (PLEG): container finished" podID="7e395259-92d6-4fa4-8eb1-da9246b3626b" containerID="75550b433f5513e6b08d573fe5e471ddbbc9ff9e6bceb340111d036c831bc4bf" exitCode=0 Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.083884 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7e395259-92d6-4fa4-8eb1-da9246b3626b","Type":"ContainerDied","Data":"75550b433f5513e6b08d573fe5e471ddbbc9ff9e6bceb340111d036c831bc4bf"} Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 
10:12:39.084354 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.084790 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.085051 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.085775 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.086645 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.087237 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34" exitCode=0 Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.087255 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692" exitCode=0 Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.087265 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0" exitCode=0 Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.087274 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6" exitCode=2 Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.089886 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k4gqt" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" containerName="registry-server" containerID="cri-o://aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891" gracePeriod=2 Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.090578 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.090928 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.091227 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" 
Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.091840 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.091795 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-k4gqt.18959e30629781e4 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-k4gqt,UID:c4c2ec9c-52c4-442e-adf8-eddb1d74d250,APIVersion:v1,ResourceVersion:27910,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 10:12:39.08987338 +0000 UTC m=+247.616338066,LastTimestamp:2026-02-19 10:12:39.08987338 +0000 UTC m=+247.616338066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.092201 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.094601 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.095038 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.095227 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.095425 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.095660 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.095859 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": 
dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.100038 4811 scope.go:117] "RemoveContainer" containerID="82b0866ab8a07990353fa81c9d1ee85674fe213ac9e67dece09a22de4039857b" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.111797 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.112426 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.113134 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.113380 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.113713 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" 
pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.115433 4811 scope.go:117] "RemoveContainer" containerID="59356221160a5e3471812120a030d86154a810af003a1f58a51240fb86fcc7b8" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.129158 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.129922 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.130875 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.131726 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.132076 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" 
pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.132675 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.190646 4811 scope.go:117] "RemoveContainer" containerID="852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.191814 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e\": container with ID starting with 852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e not found: ID does not exist" containerID="852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.191882 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e"} err="failed to get container status \"852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e\": rpc error: code = NotFound desc = could not find container \"852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e\": container with ID starting with 852f99f412b6570baa046cb13b53118c87807dfb31e8424edf6e3346efbf488e not found: ID does not exist" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.191920 4811 scope.go:117] "RemoveContainer" 
containerID="82b0866ab8a07990353fa81c9d1ee85674fe213ac9e67dece09a22de4039857b" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.192903 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b0866ab8a07990353fa81c9d1ee85674fe213ac9e67dece09a22de4039857b\": container with ID starting with 82b0866ab8a07990353fa81c9d1ee85674fe213ac9e67dece09a22de4039857b not found: ID does not exist" containerID="82b0866ab8a07990353fa81c9d1ee85674fe213ac9e67dece09a22de4039857b" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.192946 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b0866ab8a07990353fa81c9d1ee85674fe213ac9e67dece09a22de4039857b"} err="failed to get container status \"82b0866ab8a07990353fa81c9d1ee85674fe213ac9e67dece09a22de4039857b\": rpc error: code = NotFound desc = could not find container \"82b0866ab8a07990353fa81c9d1ee85674fe213ac9e67dece09a22de4039857b\": container with ID starting with 82b0866ab8a07990353fa81c9d1ee85674fe213ac9e67dece09a22de4039857b not found: ID does not exist" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.192971 4811 scope.go:117] "RemoveContainer" containerID="59356221160a5e3471812120a030d86154a810af003a1f58a51240fb86fcc7b8" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.194003 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59356221160a5e3471812120a030d86154a810af003a1f58a51240fb86fcc7b8\": container with ID starting with 59356221160a5e3471812120a030d86154a810af003a1f58a51240fb86fcc7b8 not found: ID does not exist" containerID="59356221160a5e3471812120a030d86154a810af003a1f58a51240fb86fcc7b8" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.194046 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"59356221160a5e3471812120a030d86154a810af003a1f58a51240fb86fcc7b8"} err="failed to get container status \"59356221160a5e3471812120a030d86154a810af003a1f58a51240fb86fcc7b8\": rpc error: code = NotFound desc = could not find container \"59356221160a5e3471812120a030d86154a810af003a1f58a51240fb86fcc7b8\": container with ID starting with 59356221160a5e3471812120a030d86154a810af003a1f58a51240fb86fcc7b8 not found: ID does not exist" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.194067 4811 scope.go:117] "RemoveContainer" containerID="5a9dd3fb4c4eac4406f2df15c814d034d16941e0615bdb5565faa78a230bb49a" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.209143 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.464362 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.465470 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.466093 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.466908 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.467316 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.467611 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.610120 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.610722 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h87cn\" (UniqueName: \"kubernetes.io/projected/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-kube-api-access-h87cn\") pod \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.610861 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-catalog-content\") pod \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.610993 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-utilities\") pod \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\" (UID: \"c4c2ec9c-52c4-442e-adf8-eddb1d74d250\") " Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.612154 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-utilities" (OuterVolumeSpecName: "utilities") pod "c4c2ec9c-52c4-442e-adf8-eddb1d74d250" (UID: "c4c2ec9c-52c4-442e-adf8-eddb1d74d250"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.619408 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-kube-api-access-h87cn" (OuterVolumeSpecName: "kube-api-access-h87cn") pod "c4c2ec9c-52c4-442e-adf8-eddb1d74d250" (UID: "c4c2ec9c-52c4-442e-adf8-eddb1d74d250"). InnerVolumeSpecName "kube-api-access-h87cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.660287 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4c2ec9c-52c4-442e-adf8-eddb1d74d250" (UID: "c4c2ec9c-52c4-442e-adf8-eddb1d74d250"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.712821 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.713232 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h87cn\" (UniqueName: \"kubernetes.io/projected/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-kube-api-access-h87cn\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:39 crc kubenswrapper[4811]: I0219 10:12:39.713256 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4c2ec9c-52c4-442e-adf8-eddb1d74d250-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.768354 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:12:39Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:12:39Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:12:39Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:12:39Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0dcba5d04f25f6e382ffecdd94057bd8a99cffb6a00a8c7da186e9871ae459ea\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:92f996986deaacc20f2d7929be6465ef80f234c7c73757735ab489489ad69464\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702667973},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:36d5e112bc7eb9123ef90a1acf77172cfc399f38288a5ea7de142760e8205d3e\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:42adcd31d393f3ca38b294ea506d3c0ed3d3af2db3e0a28b99e17b3e15407989\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1232883390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"
sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1216c1149e60c19986daabe98d95d067b485439895e2ff2854d94db8824f3715\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7aa8c7b527036d45810192b5cc55665e1b5225be95abc5047d29828f793b60d0\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210186952},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.769157 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.769991 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.770570 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.770918 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: 
connection refused" Feb 19 10:12:39 crc kubenswrapper[4811]: E0219 10:12:39.770958 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.098741 4811 generic.go:334] "Generic (PLEG): container finished" podID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" containerID="aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891" exitCode=0 Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.098814 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4gqt" event={"ID":"c4c2ec9c-52c4-442e-adf8-eddb1d74d250","Type":"ContainerDied","Data":"aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891"} Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.098846 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k4gqt" event={"ID":"c4c2ec9c-52c4-442e-adf8-eddb1d74d250","Type":"ContainerDied","Data":"01c20f173fbc9e3303a9ade69b9aa887ada0fbb98b575561ef1441f59609b045"} Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.098867 4811 scope.go:117] "RemoveContainer" containerID="aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.098990 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k4gqt" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.100149 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.100522 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.101699 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.102019 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.102277 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.107184 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.118124 4811 scope.go:117] "RemoveContainer" containerID="9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.119083 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.119399 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.119689 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.120015 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.120368 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.133351 4811 scope.go:117] "RemoveContainer" containerID="9991986c30fa830272e8d3f8943dea3dcd8f153f611336ee08c147b7cfa1114e" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.155266 4811 scope.go:117] "RemoveContainer" containerID="aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891" Feb 19 10:12:40 crc kubenswrapper[4811]: E0219 10:12:40.155757 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891\": container with ID starting with aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891 not found: ID does not exist" containerID="aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.155816 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891"} err="failed to get container status \"aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891\": rpc error: code = NotFound desc = could not find container \"aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891\": container with ID starting with aa02292d3821189d1853af5810111e4fc0808ee963d2effbe4cd2ac181cbe891 not found: ID does not 
exist" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.155847 4811 scope.go:117] "RemoveContainer" containerID="9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0" Feb 19 10:12:40 crc kubenswrapper[4811]: E0219 10:12:40.156347 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0\": container with ID starting with 9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0 not found: ID does not exist" containerID="9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.156383 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0"} err="failed to get container status \"9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0\": rpc error: code = NotFound desc = could not find container \"9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0\": container with ID starting with 9968c953ef73f432478604b861bd1118775f858ce63c33236d4a80f4b30593e0 not found: ID does not exist" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.156405 4811 scope.go:117] "RemoveContainer" containerID="9991986c30fa830272e8d3f8943dea3dcd8f153f611336ee08c147b7cfa1114e" Feb 19 10:12:40 crc kubenswrapper[4811]: E0219 10:12:40.156680 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9991986c30fa830272e8d3f8943dea3dcd8f153f611336ee08c147b7cfa1114e\": container with ID starting with 9991986c30fa830272e8d3f8943dea3dcd8f153f611336ee08c147b7cfa1114e not found: ID does not exist" containerID="9991986c30fa830272e8d3f8943dea3dcd8f153f611336ee08c147b7cfa1114e" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.156702 4811 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9991986c30fa830272e8d3f8943dea3dcd8f153f611336ee08c147b7cfa1114e"} err="failed to get container status \"9991986c30fa830272e8d3f8943dea3dcd8f153f611336ee08c147b7cfa1114e\": rpc error: code = NotFound desc = could not find container \"9991986c30fa830272e8d3f8943dea3dcd8f153f611336ee08c147b7cfa1114e\": container with ID starting with 9991986c30fa830272e8d3f8943dea3dcd8f153f611336ee08c147b7cfa1114e not found: ID does not exist" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.292711 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.293255 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.293817 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.294388 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.294651 4811 status_manager.go:851] 
"Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.294882 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:40 crc kubenswrapper[4811]: E0219 10:12:40.411076 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.424917 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e395259-92d6-4fa4-8eb1-da9246b3626b-kube-api-access\") pod \"7e395259-92d6-4fa4-8eb1-da9246b3626b\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.425086 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-var-lock\") pod \"7e395259-92d6-4fa4-8eb1-da9246b3626b\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.425116 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-kubelet-dir\") pod 
\"7e395259-92d6-4fa4-8eb1-da9246b3626b\" (UID: \"7e395259-92d6-4fa4-8eb1-da9246b3626b\") " Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.425237 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-var-lock" (OuterVolumeSpecName: "var-lock") pod "7e395259-92d6-4fa4-8eb1-da9246b3626b" (UID: "7e395259-92d6-4fa4-8eb1-da9246b3626b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.425318 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7e395259-92d6-4fa4-8eb1-da9246b3626b" (UID: "7e395259-92d6-4fa4-8eb1-da9246b3626b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.425469 4811 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.425492 4811 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e395259-92d6-4fa4-8eb1-da9246b3626b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.428085 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e395259-92d6-4fa4-8eb1-da9246b3626b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7e395259-92d6-4fa4-8eb1-da9246b3626b" (UID: "7e395259-92d6-4fa4-8eb1-da9246b3626b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:12:40 crc kubenswrapper[4811]: I0219 10:12:40.526795 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e395259-92d6-4fa4-8eb1-da9246b3626b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.118756 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.119598 4811 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53" exitCode=0 Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.119888 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e012a127f547eeab4343506da2f823aba95d472a8feee73f3954c9662008165" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.121459 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7e395259-92d6-4fa4-8eb1-da9246b3626b","Type":"ContainerDied","Data":"da0dd4bd2ecee080f9cf9ec62ac443fe2a970c63fa96b4576644122d8f4fe6f2"} Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.121515 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da0dd4bd2ecee080f9cf9ec62ac443fe2a970c63fa96b4576644122d8f4fe6f2" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.121545 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.130423 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.130855 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.131174 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.131519 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.131982 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.134193 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.134973 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.135509 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.135771 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.136063 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.136329 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.136596 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.136922 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.243358 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.243392 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.243472 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.243493 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.243563 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.243610 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.244054 4811 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.244078 4811 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:41 crc kubenswrapper[4811]: I0219 10:12:41.244088 4811 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 10:12:42 crc kubenswrapper[4811]: E0219 10:12:42.011492 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.127119 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.141506 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.141914 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.142153 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.142534 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.142827 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 
38.102.83.194:6443: connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.143121 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.586281 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.586589 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.586910 4811 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.587193 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: 
connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.587499 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.587743 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:42 crc kubenswrapper[4811]: I0219 10:12:42.589470 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 10:12:42 crc kubenswrapper[4811]: E0219 10:12:42.965899 4811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-k4gqt.18959e30629781e4 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-k4gqt,UID:c4c2ec9c-52c4-442e-adf8-eddb1d74d250,APIVersion:v1,ResourceVersion:27910,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 10:12:39.08987338 +0000 UTC m=+247.616338066,LastTimestamp:2026-02-19 10:12:39.08987338 +0000 UTC 
m=+247.616338066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 10:12:43 crc kubenswrapper[4811]: E0219 10:12:43.796422 4811 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:43 crc kubenswrapper[4811]: I0219 10:12:43.797044 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:43 crc kubenswrapper[4811]: W0219 10:12:43.828600 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-2cfd84ee8bf481f7e8ac4ec144af760fc1ecdf47638a483f9015d3c5123aa479 WatchSource:0}: Error finding container 2cfd84ee8bf481f7e8ac4ec144af760fc1ecdf47638a483f9015d3c5123aa479: Status 404 returned error can't find the container with id 2cfd84ee8bf481f7e8ac4ec144af760fc1ecdf47638a483f9015d3c5123aa479 Feb 19 10:12:44 crc kubenswrapper[4811]: I0219 10:12:44.141183 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275"} Feb 19 10:12:44 crc kubenswrapper[4811]: I0219 10:12:44.141235 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2cfd84ee8bf481f7e8ac4ec144af760fc1ecdf47638a483f9015d3c5123aa479"} Feb 19 10:12:44 crc kubenswrapper[4811]: I0219 10:12:44.141840 4811 status_manager.go:851] 
"Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:44 crc kubenswrapper[4811]: E0219 10:12:44.142018 4811 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:12:44 crc kubenswrapper[4811]: I0219 10:12:44.142142 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:44 crc kubenswrapper[4811]: I0219 10:12:44.142759 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:44 crc kubenswrapper[4811]: I0219 10:12:44.143018 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:44 crc kubenswrapper[4811]: I0219 10:12:44.143316 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" 
pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:45 crc kubenswrapper[4811]: E0219 10:12:45.212835 4811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="6.4s" Feb 19 10:12:49 crc kubenswrapper[4811]: E0219 10:12:49.778369 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:12:49Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:12:49Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:12:49Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T10:12:49Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0dcba5d04f25f6e382ffecdd94057bd8a99cffb6a00a8c7da186e9871ae459ea\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:92f996986deaacc20f2d7929be6465ef80f234c7c73757735ab489489ad69464\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1702667973},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341
592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:36d5e112bc7eb9123ef90a1acf77172cfc399f38288a5ea7de142760e8205d3e\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:42adcd31d393f3ca38b294ea506d3c0ed3d3af2db3e0a28b99e17b3e15407989\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1232883390},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1216c1149e60c19986daabe98d95d067b485439895e2ff2854d94db8824f3715\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7aa8c7b527036d45810192b5cc55665e1b5225be95abc5047d29828f793b60d0\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210186952},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f
66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":4857677
38},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:49 crc kubenswrapper[4811]: E0219 10:12:49.779994 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:49 crc kubenswrapper[4811]: E0219 10:12:49.780258 4811 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:49 crc kubenswrapper[4811]: E0219 10:12:49.780579 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:49 crc kubenswrapper[4811]: E0219 10:12:49.780853 4811 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:49 crc kubenswrapper[4811]: E0219 10:12:49.780871 4811 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 10:12:50 crc kubenswrapper[4811]: I0219 10:12:50.582771 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:50 crc kubenswrapper[4811]: I0219 10:12:50.583619 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:50 crc kubenswrapper[4811]: I0219 10:12:50.584091 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:50 crc kubenswrapper[4811]: I0219 10:12:50.584334 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:50 crc kubenswrapper[4811]: I0219 10:12:50.584586 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:50 crc kubenswrapper[4811]: I0219 10:12:50.584825 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:50 crc kubenswrapper[4811]: I0219 10:12:50.597819 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62cc56d1-ce3a-4458-a5b5-de646468654f" Feb 19 10:12:50 crc kubenswrapper[4811]: I0219 10:12:50.598075 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62cc56d1-ce3a-4458-a5b5-de646468654f" Feb 19 10:12:50 crc kubenswrapper[4811]: E0219 10:12:50.598546 4811 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:50 crc kubenswrapper[4811]: I0219 10:12:50.599132 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:50 crc kubenswrapper[4811]: W0219 10:12:50.619311 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-724a32b2dbf7db6035cfc48b81125cc01e921e49a9ec94333110cb86fca72cc2 WatchSource:0}: Error finding container 724a32b2dbf7db6035cfc48b81125cc01e921e49a9ec94333110cb86fca72cc2: Status 404 returned error can't find the container with id 724a32b2dbf7db6035cfc48b81125cc01e921e49a9ec94333110cb86fca72cc2 Feb 19 10:12:51 crc kubenswrapper[4811]: I0219 10:12:51.188254 4811 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4e0d5eac3447cc143436f96fafdace42e04aa858600ac2f8831ee286b58c3b4e" exitCode=0 Feb 19 10:12:51 crc kubenswrapper[4811]: I0219 10:12:51.188378 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4e0d5eac3447cc143436f96fafdace42e04aa858600ac2f8831ee286b58c3b4e"} Feb 19 10:12:51 crc kubenswrapper[4811]: I0219 10:12:51.188630 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"724a32b2dbf7db6035cfc48b81125cc01e921e49a9ec94333110cb86fca72cc2"} Feb 19 10:12:51 crc kubenswrapper[4811]: I0219 10:12:51.188934 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62cc56d1-ce3a-4458-a5b5-de646468654f" Feb 19 10:12:51 crc kubenswrapper[4811]: I0219 10:12:51.188949 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62cc56d1-ce3a-4458-a5b5-de646468654f" Feb 19 10:12:51 crc kubenswrapper[4811]: E0219 10:12:51.189385 4811 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:51 crc kubenswrapper[4811]: I0219 10:12:51.189528 4811 status_manager.go:851] "Failed to get status for pod" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" pod="openshift-marketplace/redhat-operators-zq68z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zq68z\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:51 crc kubenswrapper[4811]: I0219 10:12:51.190034 4811 status_manager.go:851] "Failed to get status for pod" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:51 crc kubenswrapper[4811]: I0219 10:12:51.190269 4811 status_manager.go:851] "Failed to get status for pod" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" pod="openshift-marketplace/certified-operators-ksjbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ksjbn\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:51 crc kubenswrapper[4811]: I0219 10:12:51.190608 4811 status_manager.go:851] "Failed to get status for pod" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" pod="openshift-marketplace/community-operators-k4gqt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k4gqt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:51 crc kubenswrapper[4811]: I0219 10:12:51.191079 4811 status_manager.go:851] "Failed to get status for pod" podUID="1fb8f251-0f82-4f32-a893-0c56c5c08733" 
pod="openshift-authentication/oauth-openshift-5bf578b4ff-q4v2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5bf578b4ff-q4v2n\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 19 10:12:52 crc kubenswrapper[4811]: I0219 10:12:52.198279 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6ee8f91ddb568cd923c5f41b6ae1eaae12429bffea8c0e4a7d2ba13f817382ea"} Feb 19 10:12:52 crc kubenswrapper[4811]: I0219 10:12:52.198329 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8f934ef4eb2acaefdbea511364c3b64ae5c9ed724a7f3fe9ade89ba10d501085"} Feb 19 10:12:53 crc kubenswrapper[4811]: I0219 10:12:53.220803 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 10:12:53 crc kubenswrapper[4811]: I0219 10:12:53.221116 4811 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620" exitCode=1 Feb 19 10:12:53 crc kubenswrapper[4811]: I0219 10:12:53.221180 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620"} Feb 19 10:12:53 crc kubenswrapper[4811]: I0219 10:12:53.221680 4811 scope.go:117] "RemoveContainer" containerID="b7285b9353fdd0561944f83398a957dc8cb0da871980dc9af4051b0b77c1e620" Feb 19 10:12:53 crc kubenswrapper[4811]: I0219 10:12:53.226024 4811 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ac94b991860ef2a7a8ededd8a9ea59904ea9ef8e61350d307f298d2ad2ca2a75"} Feb 19 10:12:53 crc kubenswrapper[4811]: I0219 10:12:53.226067 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bcb92787aef10546006a7a32cc3131efb7d8fe1e6a1a97376f2eebf738a0d52d"} Feb 19 10:12:53 crc kubenswrapper[4811]: I0219 10:12:53.226079 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f14bc92097e64085a11e9ff8f80cfcac4b4811d833ba54d9fd0e6cbd4833106c"} Feb 19 10:12:53 crc kubenswrapper[4811]: I0219 10:12:53.226400 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:53 crc kubenswrapper[4811]: I0219 10:12:53.226502 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62cc56d1-ce3a-4458-a5b5-de646468654f" Feb 19 10:12:53 crc kubenswrapper[4811]: I0219 10:12:53.226526 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62cc56d1-ce3a-4458-a5b5-de646468654f" Feb 19 10:12:54 crc kubenswrapper[4811]: I0219 10:12:54.234665 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 10:12:54 crc kubenswrapper[4811]: I0219 10:12:54.235051 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"726c5ca3d14628027fbb7a135031e87cd15a6dfcf319fc6e0c90da99bc20910e"} Feb 19 10:12:54 crc kubenswrapper[4811]: I0219 10:12:54.979787 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:12:54 crc kubenswrapper[4811]: I0219 10:12:54.979977 4811 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 10:12:54 crc kubenswrapper[4811]: I0219 10:12:54.980030 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 10:12:55 crc kubenswrapper[4811]: I0219 10:12:55.600089 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:55 crc kubenswrapper[4811]: I0219 10:12:55.600219 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:55 crc kubenswrapper[4811]: I0219 10:12:55.605413 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:58 crc kubenswrapper[4811]: I0219 10:12:58.244547 4811 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:59 crc kubenswrapper[4811]: I0219 10:12:59.262414 4811 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62cc56d1-ce3a-4458-a5b5-de646468654f" Feb 19 10:12:59 crc kubenswrapper[4811]: I0219 10:12:59.262466 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62cc56d1-ce3a-4458-a5b5-de646468654f" Feb 19 10:12:59 crc kubenswrapper[4811]: I0219 10:12:59.267969 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:12:59 crc kubenswrapper[4811]: I0219 10:12:59.273072 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f77d55e6-4559-4b68-ada1-795060b7b5f0" Feb 19 10:12:59 crc kubenswrapper[4811]: I0219 10:12:59.830651 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:13:00 crc kubenswrapper[4811]: I0219 10:13:00.269080 4811 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62cc56d1-ce3a-4458-a5b5-de646468654f" Feb 19 10:13:00 crc kubenswrapper[4811]: I0219 10:13:00.269113 4811 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="62cc56d1-ce3a-4458-a5b5-de646468654f" Feb 19 10:13:02 crc kubenswrapper[4811]: I0219 10:13:02.605927 4811 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f77d55e6-4559-4b68-ada1-795060b7b5f0" Feb 19 10:13:04 crc kubenswrapper[4811]: I0219 10:13:04.980222 4811 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 
192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 10:13:04 crc kubenswrapper[4811]: I0219 10:13:04.981007 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 10:13:08 crc kubenswrapper[4811]: I0219 10:13:08.327376 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 10:13:09 crc kubenswrapper[4811]: I0219 10:13:09.871085 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 10:13:10 crc kubenswrapper[4811]: I0219 10:13:10.224349 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 10:13:10 crc kubenswrapper[4811]: I0219 10:13:10.426905 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 10:13:10 crc kubenswrapper[4811]: I0219 10:13:10.454011 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 10:13:10 crc kubenswrapper[4811]: I0219 10:13:10.461932 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 10:13:10 crc kubenswrapper[4811]: I0219 10:13:10.570171 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 10:13:10 crc kubenswrapper[4811]: I0219 10:13:10.597957 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 10:13:10 crc kubenswrapper[4811]: I0219 10:13:10.645288 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 10:13:10 crc kubenswrapper[4811]: I0219 10:13:10.877085 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 10:13:10 crc kubenswrapper[4811]: I0219 10:13:10.931138 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 10:13:10 crc kubenswrapper[4811]: I0219 10:13:10.954208 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 10:13:11 crc kubenswrapper[4811]: I0219 10:13:11.027558 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 10:13:11 crc kubenswrapper[4811]: I0219 10:13:11.108215 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 10:13:11 crc kubenswrapper[4811]: I0219 10:13:11.557527 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 10:13:11 crc kubenswrapper[4811]: I0219 10:13:11.680752 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 10:13:11 crc kubenswrapper[4811]: I0219 10:13:11.866041 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 10:13:11 crc kubenswrapper[4811]: I0219 10:13:11.929414 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 10:13:11 crc kubenswrapper[4811]: I0219 10:13:11.986989 4811 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.069624 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.175682 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.303720 4811 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.315509 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.319928 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.335353 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.451181 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.559637 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.606969 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.751418 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 
10:13:12.871113 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.938856 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 10:13:12 crc kubenswrapper[4811]: I0219 10:13:12.964101 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.042280 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.109640 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.161601 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.195755 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.216248 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.217120 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.282231 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.290225 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"config" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.348724 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.607248 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.686338 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.769673 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.770883 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.870297 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.952767 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.971891 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.977687 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 10:13:13 crc kubenswrapper[4811]: I0219 10:13:13.990439 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 10:13:14 crc 
kubenswrapper[4811]: I0219 10:13:14.011846 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.018627 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.061718 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.135383 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.204758 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.302439 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.330147 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.505090 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.588639 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.631712 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.632675 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.749812 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.764438 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.789780 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.798744 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.802421 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.804634 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.892505 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.893334 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.901758 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 10:13:14.984879 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:13:14 crc kubenswrapper[4811]: I0219 
10:13:14.989001 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.051105 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.101537 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.150980 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.245169 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.268920 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.317957 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.389796 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.447416 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.526822 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.527461 4811 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.658393 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.762874 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.771047 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.773539 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.775547 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.843712 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 10:13:15 crc kubenswrapper[4811]: I0219 10:13:15.897367 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.019729 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.086292 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.158421 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 10:13:16 crc 
kubenswrapper[4811]: I0219 10:13:16.163469 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.180019 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.297432 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.306580 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.428802 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.445826 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.487890 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.525693 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.598977 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.634590 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.642416 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.684519 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.848956 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.861755 4811 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.869082 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ksjbn","openshift-marketplace/community-operators-k4gqt","openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.869178 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.876193 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.912073 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.91204731 podStartE2EDuration="18.91204731s" podCreationTimestamp="2026-02-19 10:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:13:16.893858393 +0000 UTC m=+285.420323079" watchObservedRunningTime="2026-02-19 10:13:16.91204731 +0000 UTC m=+285.438512006" Feb 19 10:13:16 crc kubenswrapper[4811]: I0219 10:13:16.920689 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 10:13:17 crc kubenswrapper[4811]: 
I0219 10:13:17.019051 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.050970 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.100328 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.165303 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.175749 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.302437 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.310007 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.353907 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.424641 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.435544 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.437413 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 
10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.462220 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.512908 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.518476 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.524627 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.549336 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.550863 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.571322 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.605799 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.686172 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.768776 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.784940 4811 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.850900 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 10:13:17 crc kubenswrapper[4811]: I0219 10:13:17.856436 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.022353 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.023517 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.093115 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.119782 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.120312 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.153709 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.338972 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.364533 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 
10:13:18.385605 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.440981 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.477096 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.529364 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.542735 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.592037 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.594801 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" path="/var/lib/kubelet/pods/c4c2ec9c-52c4-442e-adf8-eddb1d74d250/volumes" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.596160 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca27a1be-f8a5-447f-b61f-da206a041290" path="/var/lib/kubelet/pods/ca27a1be-f8a5-447f-b61f-da206a041290/volumes" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.641391 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.742248 4811 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.763260 4811 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.839804 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.841625 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.842511 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.854519 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.877622 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.901632 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 10:13:18 crc kubenswrapper[4811]: I0219 10:13:18.951179 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.003574 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.167024 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.190270 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 10:13:19 crc 
kubenswrapper[4811]: I0219 10:13:19.237808 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.306720 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.327100 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.351287 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.356318 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.381237 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.395005 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.443356 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.540542 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.701035 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.777074 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.824780 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.868371 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.890845 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.921132 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 10:13:19 crc kubenswrapper[4811]: I0219 10:13:19.957264 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.008068 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.030827 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.069663 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.125413 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.222586 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.290881 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.337015 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.358798 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.443342 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.466483 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.475686 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.488612 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.522380 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.542959 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.545750 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.562792 
4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.634996 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.666463 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.704257 4811 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.705014 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275" gracePeriod=5 Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.811741 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.829539 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 10:13:20 crc kubenswrapper[4811]: I0219 10:13:20.980794 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.020355 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.062813 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.153837 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.231368 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.243518 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.368778 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.383877 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.385089 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.420661 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.521695 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.537765 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.538771 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.572176 4811 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.656882 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.861121 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.866722 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.945745 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 10:13:21 crc kubenswrapper[4811]: I0219 10:13:21.953707 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.028174 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.031924 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.079467 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.177372 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.221612 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 10:13:22 crc 
kubenswrapper[4811]: I0219 10:13:22.225658 4811 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.257287 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.384398 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.421298 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.507048 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.658129 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.825439 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.856869 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.938311 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 10:13:22 crc kubenswrapper[4811]: I0219 10:13:22.982616 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 10:13:23 crc kubenswrapper[4811]: I0219 10:13:23.193878 4811 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 10:13:23 crc kubenswrapper[4811]: I0219 10:13:23.196848 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 10:13:23 crc kubenswrapper[4811]: I0219 10:13:23.339842 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 10:13:23 crc kubenswrapper[4811]: I0219 10:13:23.570825 4811 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 10:13:23 crc kubenswrapper[4811]: I0219 10:13:23.584166 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 10:13:23 crc kubenswrapper[4811]: I0219 10:13:23.610947 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 10:13:23 crc kubenswrapper[4811]: I0219 10:13:23.636908 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 10:13:23 crc kubenswrapper[4811]: I0219 10:13:23.759339 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 10:13:23 crc kubenswrapper[4811]: I0219 10:13:23.760908 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 10:13:23 crc kubenswrapper[4811]: I0219 10:13:23.863231 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 10:13:24 crc kubenswrapper[4811]: I0219 10:13:24.277843 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 10:13:24 crc kubenswrapper[4811]: I0219 10:13:24.310555 4811 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 10:13:24 crc kubenswrapper[4811]: I0219 10:13:24.415695 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 10:13:24 crc kubenswrapper[4811]: I0219 10:13:24.466279 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 10:13:24 crc kubenswrapper[4811]: I0219 10:13:24.652255 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 10:13:24 crc kubenswrapper[4811]: I0219 10:13:24.872722 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 10:13:24 crc kubenswrapper[4811]: I0219 10:13:24.987529 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 10:13:25 crc kubenswrapper[4811]: I0219 10:13:25.513676 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 10:13:25 crc kubenswrapper[4811]: I0219 10:13:25.686085 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 10:13:25 crc kubenswrapper[4811]: I0219 10:13:25.713057 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.148681 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.290150 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.290229 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.351859 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.351910 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.351939 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.351999 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.352002 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: 
"f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.352022 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.352058 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.352149 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.352207 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.352185 4811 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.352283 4811 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.361559 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.452739 4811 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.452809 4811 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.452819 4811 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.483332 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.483385 4811 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275" exitCode=137 Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.483426 4811 scope.go:117] "RemoveContainer" containerID="1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.483553 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.500637 4811 scope.go:117] "RemoveContainer" containerID="1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275" Feb 19 10:13:26 crc kubenswrapper[4811]: E0219 10:13:26.500953 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275\": container with ID starting with 1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275 not found: ID does not exist" containerID="1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.500991 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275"} err="failed to get container status \"1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275\": rpc error: code = NotFound desc = could not find container \"1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275\": container with ID starting with 1595a7f0a0f6f04a34a1059cb7463840021c0abca1375ec3d99bc79a4d769275 not found: 
ID does not exist" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.552070 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 10:13:26 crc kubenswrapper[4811]: I0219 10:13:26.593247 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 19 10:13:27 crc kubenswrapper[4811]: I0219 10:13:27.246572 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 10:13:27 crc kubenswrapper[4811]: I0219 10:13:27.267595 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 10:13:27 crc kubenswrapper[4811]: I0219 10:13:27.486369 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 10:13:27 crc kubenswrapper[4811]: I0219 10:13:27.766730 4811 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 10:13:32 crc kubenswrapper[4811]: I0219 10:13:32.293365 4811 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.074158 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-24tbz"] Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.075505 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" podUID="2fce0222-ff8a-4a92-8c75-bdbd44f509d5" containerName="controller-manager" containerID="cri-o://6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd" gracePeriod=30 Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 
10:13:48.177289 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5"]
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.177684 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" podUID="9a6532e9-cca3-47cf-89e4-77fac53a5d18" containerName="route-controller-manager" containerID="cri-o://907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14" gracePeriod=30
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.460501 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz"
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.581704 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-serving-cert\") pod \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") "
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.581780 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt8bx\" (UniqueName: \"kubernetes.io/projected/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-kube-api-access-dt8bx\") pod \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") "
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.581812 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-client-ca\") pod \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") "
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.581863 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-proxy-ca-bundles\") pod \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") "
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.581896 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-config\") pod \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\" (UID: \"2fce0222-ff8a-4a92-8c75-bdbd44f509d5\") "
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.583276 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2fce0222-ff8a-4a92-8c75-bdbd44f509d5" (UID: "2fce0222-ff8a-4a92-8c75-bdbd44f509d5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.583338 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-client-ca" (OuterVolumeSpecName: "client-ca") pod "2fce0222-ff8a-4a92-8c75-bdbd44f509d5" (UID: "2fce0222-ff8a-4a92-8c75-bdbd44f509d5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.584785 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-config" (OuterVolumeSpecName: "config") pod "2fce0222-ff8a-4a92-8c75-bdbd44f509d5" (UID: "2fce0222-ff8a-4a92-8c75-bdbd44f509d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.591869 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2fce0222-ff8a-4a92-8c75-bdbd44f509d5" (UID: "2fce0222-ff8a-4a92-8c75-bdbd44f509d5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.592412 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-kube-api-access-dt8bx" (OuterVolumeSpecName: "kube-api-access-dt8bx") pod "2fce0222-ff8a-4a92-8c75-bdbd44f509d5" (UID: "2fce0222-ff8a-4a92-8c75-bdbd44f509d5"). InnerVolumeSpecName "kube-api-access-dt8bx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.610915 4811 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5"
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.630809 4811 generic.go:334] "Generic (PLEG): container finished" podID="2fce0222-ff8a-4a92-8c75-bdbd44f509d5" containerID="6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd" exitCode=0
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.630885 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" event={"ID":"2fce0222-ff8a-4a92-8c75-bdbd44f509d5","Type":"ContainerDied","Data":"6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd"}
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.630917 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz" event={"ID":"2fce0222-ff8a-4a92-8c75-bdbd44f509d5","Type":"ContainerDied","Data":"a9b1c9f42a3f66118bcde72c8c05aeb2c9605ea98cfc5f89fbd3e85a23cc4edc"}
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.630935 4811 scope.go:117] "RemoveContainer" containerID="6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd"
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.631076 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-24tbz"
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.641126 4811 generic.go:334] "Generic (PLEG): container finished" podID="9a6532e9-cca3-47cf-89e4-77fac53a5d18" containerID="907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14" exitCode=0
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.641199 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" event={"ID":"9a6532e9-cca3-47cf-89e4-77fac53a5d18","Type":"ContainerDied","Data":"907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14"}
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.641242 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5" event={"ID":"9a6532e9-cca3-47cf-89e4-77fac53a5d18","Type":"ContainerDied","Data":"ddbae4ea92da1fd7db167c18581b9d00a3f66b72d476e4e4d06dd0ff3e3b5bca"}
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.642746 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5"
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.658422 4811 scope.go:117] "RemoveContainer" containerID="6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd"
Feb 19 10:13:48 crc kubenswrapper[4811]: E0219 10:13:48.658916 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd\": container with ID starting with 6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd not found: ID does not exist" containerID="6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd"
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.658951 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd"} err="failed to get container status \"6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd\": rpc error: code = NotFound desc = could not find container \"6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd\": container with ID starting with 6a59f9e5de37b5023d70f4c43f5ec5f80c9b0b93b2e128a46d2934b0ec6fa7bd not found: ID does not exist"
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.658982 4811 scope.go:117] "RemoveContainer" containerID="907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14"
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.677789 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-24tbz"]
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.680504 4811 scope.go:117] "RemoveContainer" containerID="907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14"
Feb 19 10:13:48 crc kubenswrapper[4811]: E0219 10:13:48.681757 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14\": container with ID starting with 907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14 not found: ID does not exist" containerID="907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14"
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.681980 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14"} err="failed to get container status \"907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14\": rpc error: code = NotFound desc = could not find container \"907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14\": container with ID starting with 907b2ebb7ea0d0be849fa47538cc5540663702d343954c3b078194626a78ba14 not found: ID does not exist"
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.683575 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-24tbz"]
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.684009 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.684037 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.684050 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt8bx\" (UniqueName: \"kubernetes.io/projected/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-kube-api-access-dt8bx\") on node \"crc\" DevicePath \"\""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219
10:13:48.684062 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.684072 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2fce0222-ff8a-4a92-8c75-bdbd44f509d5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.785587 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc7f9\" (UniqueName: \"kubernetes.io/projected/9a6532e9-cca3-47cf-89e4-77fac53a5d18-kube-api-access-sc7f9\") pod \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") "
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.785731 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-client-ca\") pod \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") "
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.785829 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-config\") pod \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") "
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.785865 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6532e9-cca3-47cf-89e4-77fac53a5d18-serving-cert\") pod \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\" (UID: \"9a6532e9-cca3-47cf-89e4-77fac53a5d18\") "
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.787298 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-client-ca" (OuterVolumeSpecName: "client-ca") pod "9a6532e9-cca3-47cf-89e4-77fac53a5d18" (UID: "9a6532e9-cca3-47cf-89e4-77fac53a5d18"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.787499 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-config" (OuterVolumeSpecName: "config") pod "9a6532e9-cca3-47cf-89e4-77fac53a5d18" (UID: "9a6532e9-cca3-47cf-89e4-77fac53a5d18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.790489 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6532e9-cca3-47cf-89e4-77fac53a5d18-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9a6532e9-cca3-47cf-89e4-77fac53a5d18" (UID: "9a6532e9-cca3-47cf-89e4-77fac53a5d18"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.790529 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6532e9-cca3-47cf-89e4-77fac53a5d18-kube-api-access-sc7f9" (OuterVolumeSpecName: "kube-api-access-sc7f9") pod "9a6532e9-cca3-47cf-89e4-77fac53a5d18" (UID: "9a6532e9-cca3-47cf-89e4-77fac53a5d18"). InnerVolumeSpecName "kube-api-access-sc7f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.887181 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.887218 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a6532e9-cca3-47cf-89e4-77fac53a5d18-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.887226 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6532e9-cca3-47cf-89e4-77fac53a5d18-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.887236 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc7f9\" (UniqueName: \"kubernetes.io/projected/9a6532e9-cca3-47cf-89e4-77fac53a5d18-kube-api-access-sc7f9\") on node \"crc\" DevicePath \"\""
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.973193 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5"]
Feb 19 10:13:48 crc kubenswrapper[4811]: I0219 10:13:48.976063 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kpjc5"]
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.448249 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c5c8764-8gq58"]
Feb 19 10:13:49 crc kubenswrapper[4811]: E0219 10:13:49.448574 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.448590 4811 state_mem.go:107] "Deleted
CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 10:13:49 crc kubenswrapper[4811]: E0219 10:13:49.448607 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" containerName="extract-utilities"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.448614 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" containerName="extract-utilities"
Feb 19 10:13:49 crc kubenswrapper[4811]: E0219 10:13:49.448636 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" containerName="extract-content"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.448643 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" containerName="extract-content"
Feb 19 10:13:49 crc kubenswrapper[4811]: E0219 10:13:49.448653 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6532e9-cca3-47cf-89e4-77fac53a5d18" containerName="route-controller-manager"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.448661 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6532e9-cca3-47cf-89e4-77fac53a5d18" containerName="route-controller-manager"
Feb 19 10:13:49 crc kubenswrapper[4811]: E0219 10:13:49.448670 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" containerName="installer"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.448677 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" containerName="installer"
Feb 19 10:13:49 crc kubenswrapper[4811]: E0219 10:13:49.448686 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" containerName="registry-server"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.448693 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" containerName="registry-server"
Feb 19 10:13:49 crc kubenswrapper[4811]: E0219 10:13:49.448701 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce0222-ff8a-4a92-8c75-bdbd44f509d5" containerName="controller-manager"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.448707 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce0222-ff8a-4a92-8c75-bdbd44f509d5" containerName="controller-manager"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.449193 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.449232 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c2ec9c-52c4-442e-adf8-eddb1d74d250" containerName="registry-server"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.449253 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e395259-92d6-4fa4-8eb1-da9246b3626b" containerName="installer"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.449264 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6532e9-cca3-47cf-89e4-77fac53a5d18" containerName="route-controller-manager"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.449287 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fce0222-ff8a-4a92-8c75-bdbd44f509d5" containerName="controller-manager"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.450073 4811 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.452533 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"]
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.453590 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.459008 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.460012 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.462897 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.465398 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.465477 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.465553 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.465646 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.465755 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.465848 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.465939 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.466327 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.466487 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.466553 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.511839 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"]
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.523063 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c5c8764-8gq58"]
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.605948 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da463ff9-6351-49e3-9bdc-a07b01d3d71e-client-ca\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.606008 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-serving-cert\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.606043 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-config\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.606362 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da463ff9-6351-49e3-9bdc-a07b01d3d71e-serving-cert\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.606569 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-proxy-ca-bundles\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.606638 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hj97\" (UniqueName: \"kubernetes.io/projected/da463ff9-6351-49e3-9bdc-a07b01d3d71e-kube-api-access-8hj97\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.606685 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-client-ca\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.606799 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz86p\" (UniqueName: \"kubernetes.io/projected/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-kube-api-access-pz86p\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.607513 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da463ff9-6351-49e3-9bdc-a07b01d3d71e-config\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.709093 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da463ff9-6351-49e3-9bdc-a07b01d3d71e-serving-cert\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.709195 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-proxy-ca-bundles\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.709239 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hj97\" (UniqueName: \"kubernetes.io/projected/da463ff9-6351-49e3-9bdc-a07b01d3d71e-kube-api-access-8hj97\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.709273 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-client-ca\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.709371 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz86p\" (UniqueName: \"kubernetes.io/projected/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-kube-api-access-pz86p\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.709481 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da463ff9-6351-49e3-9bdc-a07b01d3d71e-config\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4" Feb
19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.709525 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da463ff9-6351-49e3-9bdc-a07b01d3d71e-client-ca\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.709559 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-serving-cert\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.709596 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-config\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.711911 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da463ff9-6351-49e3-9bdc-a07b01d3d71e-client-ca\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.712133 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-config\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.712178 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-proxy-ca-bundles\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.712268 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-client-ca\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.712928 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da463ff9-6351-49e3-9bdc-a07b01d3d71e-config\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.715838 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da463ff9-6351-49e3-9bdc-a07b01d3d71e-serving-cert\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.716897 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-serving-cert\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.737127 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hj97\" (UniqueName: \"kubernetes.io/projected/da463ff9-6351-49e3-9bdc-a07b01d3d71e-kube-api-access-8hj97\") pod \"route-controller-manager-54cf45ddb6-npmh4\" (UID: \"da463ff9-6351-49e3-9bdc-a07b01d3d71e\") " pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.739397 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz86p\" (UniqueName: \"kubernetes.io/projected/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-kube-api-access-pz86p\") pod \"controller-manager-6c5c8764-8gq58\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.811531 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58"
Feb 19 10:13:49 crc kubenswrapper[4811]: I0219 10:13:49.822647 4811 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4" Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.117924 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4"] Feb 19 10:13:50 crc kubenswrapper[4811]: W0219 10:13:50.132633 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda463ff9_6351_49e3_9bdc_a07b01d3d71e.slice/crio-0d9aa01d2e0cb879e7b668bf0fe0c779854ec9dbba3f22563e1755d1ca762d9a WatchSource:0}: Error finding container 0d9aa01d2e0cb879e7b668bf0fe0c779854ec9dbba3f22563e1755d1ca762d9a: Status 404 returned error can't find the container with id 0d9aa01d2e0cb879e7b668bf0fe0c779854ec9dbba3f22563e1755d1ca762d9a Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.171800 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c5c8764-8gq58"] Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.590373 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fce0222-ff8a-4a92-8c75-bdbd44f509d5" path="/var/lib/kubelet/pods/2fce0222-ff8a-4a92-8c75-bdbd44f509d5/volumes" Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.591845 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6532e9-cca3-47cf-89e4-77fac53a5d18" path="/var/lib/kubelet/pods/9a6532e9-cca3-47cf-89e4-77fac53a5d18/volumes" Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.656337 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4" event={"ID":"da463ff9-6351-49e3-9bdc-a07b01d3d71e","Type":"ContainerStarted","Data":"c14e5f8bd2a3797aab6dfac6394b751fe69a0eabcfa611a5e937ac18a8b3ecf9"} Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.657640 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4" event={"ID":"da463ff9-6351-49e3-9bdc-a07b01d3d71e","Type":"ContainerStarted","Data":"0d9aa01d2e0cb879e7b668bf0fe0c779854ec9dbba3f22563e1755d1ca762d9a"} Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.657762 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4" Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.658506 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58" event={"ID":"2a60ff98-02c4-4af6-a628-ffb1f3e8d477","Type":"ContainerStarted","Data":"fc0d6143b63c6ec46f545feb0723adaf0d56576b8f3ed3ddaf8b8c4802880fb0"} Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.658573 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58" event={"ID":"2a60ff98-02c4-4af6-a628-ffb1f3e8d477","Type":"ContainerStarted","Data":"24150ddd9030f033e34124cb6dfd25e7af3cf2f338885be8a5305e5e5b517d6c"} Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.659259 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58" Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.684142 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58" Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.701639 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4" podStartSLOduration=2.701601571 podStartE2EDuration="2.701601571s" podCreationTimestamp="2026-02-19 10:13:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 10:13:50.693784701 +0000 UTC m=+319.220249387" watchObservedRunningTime="2026-02-19 10:13:50.701601571 +0000 UTC m=+319.228066257" Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.724749 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58" podStartSLOduration=2.724722375 podStartE2EDuration="2.724722375s" podCreationTimestamp="2026-02-19 10:13:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:13:50.720032854 +0000 UTC m=+319.246497540" watchObservedRunningTime="2026-02-19 10:13:50.724722375 +0000 UTC m=+319.251187051" Feb 19 10:13:50 crc kubenswrapper[4811]: I0219 10:13:50.851386 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54cf45ddb6-npmh4" Feb 19 10:14:12 crc kubenswrapper[4811]: I0219 10:14:12.702678 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7qnh"] Feb 19 10:14:12 crc kubenswrapper[4811]: I0219 10:14:12.704120 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q7qnh" podUID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" containerName="registry-server" containerID="cri-o://1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f" gracePeriod=2 Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.185934 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.358382 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vmvs\" (UniqueName: \"kubernetes.io/projected/0f0825a4-4e84-49d7-929e-165ced3ed2f8-kube-api-access-6vmvs\") pod \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.358533 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-catalog-content\") pod \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.358685 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-utilities\") pod \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\" (UID: \"0f0825a4-4e84-49d7-929e-165ced3ed2f8\") " Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.360490 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-utilities" (OuterVolumeSpecName: "utilities") pod "0f0825a4-4e84-49d7-929e-165ced3ed2f8" (UID: "0f0825a4-4e84-49d7-929e-165ced3ed2f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.371427 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f0825a4-4e84-49d7-929e-165ced3ed2f8-kube-api-access-6vmvs" (OuterVolumeSpecName: "kube-api-access-6vmvs") pod "0f0825a4-4e84-49d7-929e-165ced3ed2f8" (UID: "0f0825a4-4e84-49d7-929e-165ced3ed2f8"). InnerVolumeSpecName "kube-api-access-6vmvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.394086 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f0825a4-4e84-49d7-929e-165ced3ed2f8" (UID: "0f0825a4-4e84-49d7-929e-165ced3ed2f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.460409 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vmvs\" (UniqueName: \"kubernetes.io/projected/0f0825a4-4e84-49d7-929e-165ced3ed2f8-kube-api-access-6vmvs\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.460499 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.460524 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f0825a4-4e84-49d7-929e-165ced3ed2f8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.823558 4811 generic.go:334] "Generic (PLEG): container finished" podID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" containerID="1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f" exitCode=0 Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.823648 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7qnh" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.823662 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7qnh" event={"ID":"0f0825a4-4e84-49d7-929e-165ced3ed2f8","Type":"ContainerDied","Data":"1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f"} Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.823772 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7qnh" event={"ID":"0f0825a4-4e84-49d7-929e-165ced3ed2f8","Type":"ContainerDied","Data":"193ac46de448e3f035c8a84417b890634fad12dada53d4d502f8977125fbe775"} Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.823806 4811 scope.go:117] "RemoveContainer" containerID="1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.846704 4811 scope.go:117] "RemoveContainer" containerID="9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.860767 4811 scope.go:117] "RemoveContainer" containerID="1912902d3daa2ab267307f0734427337957c7c1d7a0a71ce24bd805e0141fadf" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.895486 4811 scope.go:117] "RemoveContainer" containerID="1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f" Feb 19 10:14:13 crc kubenswrapper[4811]: E0219 10:14:13.896039 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f\": container with ID starting with 1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f not found: ID does not exist" containerID="1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.896090 4811 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f"} err="failed to get container status \"1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f\": rpc error: code = NotFound desc = could not find container \"1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f\": container with ID starting with 1bd85b47fc39ae553708649c45b306ef907582b08a10dc180248263343731b9f not found: ID does not exist" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.896116 4811 scope.go:117] "RemoveContainer" containerID="9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e" Feb 19 10:14:13 crc kubenswrapper[4811]: E0219 10:14:13.896485 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e\": container with ID starting with 9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e not found: ID does not exist" containerID="9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.896528 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e"} err="failed to get container status \"9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e\": rpc error: code = NotFound desc = could not find container \"9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e\": container with ID starting with 9bb22657da7ef57230647db310adcc359a57da229293055d5bc81c198ac4427e not found: ID does not exist" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.896553 4811 scope.go:117] "RemoveContainer" containerID="1912902d3daa2ab267307f0734427337957c7c1d7a0a71ce24bd805e0141fadf" Feb 19 10:14:13 crc kubenswrapper[4811]: E0219 10:14:13.896854 4811 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1912902d3daa2ab267307f0734427337957c7c1d7a0a71ce24bd805e0141fadf\": container with ID starting with 1912902d3daa2ab267307f0734427337957c7c1d7a0a71ce24bd805e0141fadf not found: ID does not exist" containerID="1912902d3daa2ab267307f0734427337957c7c1d7a0a71ce24bd805e0141fadf" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.896903 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1912902d3daa2ab267307f0734427337957c7c1d7a0a71ce24bd805e0141fadf"} err="failed to get container status \"1912902d3daa2ab267307f0734427337957c7c1d7a0a71ce24bd805e0141fadf\": rpc error: code = NotFound desc = could not find container \"1912902d3daa2ab267307f0734427337957c7c1d7a0a71ce24bd805e0141fadf\": container with ID starting with 1912902d3daa2ab267307f0734427337957c7c1d7a0a71ce24bd805e0141fadf not found: ID does not exist" Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.924989 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7qnh"] Feb 19 10:14:13 crc kubenswrapper[4811]: I0219 10:14:13.928546 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7qnh"] Feb 19 10:14:14 crc kubenswrapper[4811]: I0219 10:14:14.589527 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" path="/var/lib/kubelet/pods/0f0825a4-4e84-49d7-929e-165ced3ed2f8/volumes" Feb 19 10:14:26 crc kubenswrapper[4811]: I0219 10:14:26.787994 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:14:26 crc kubenswrapper[4811]: I0219 10:14:26.788693 4811 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:14:32 crc kubenswrapper[4811]: I0219 10:14:32.834627 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zq68z"] Feb 19 10:14:32 crc kubenswrapper[4811]: I0219 10:14:32.835701 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zq68z" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" containerName="registry-server" containerID="cri-o://1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784" gracePeriod=2 Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.228662 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.335209 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krrpn\" (UniqueName: \"kubernetes.io/projected/f528e872-8fd6-4cfc-b4d5-129433fed8f6-kube-api-access-krrpn\") pod \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.335855 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-utilities\") pod \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.336119 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-catalog-content\") pod \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\" (UID: \"f528e872-8fd6-4cfc-b4d5-129433fed8f6\") " Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.336758 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-utilities" (OuterVolumeSpecName: "utilities") pod "f528e872-8fd6-4cfc-b4d5-129433fed8f6" (UID: "f528e872-8fd6-4cfc-b4d5-129433fed8f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.345240 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f528e872-8fd6-4cfc-b4d5-129433fed8f6-kube-api-access-krrpn" (OuterVolumeSpecName: "kube-api-access-krrpn") pod "f528e872-8fd6-4cfc-b4d5-129433fed8f6" (UID: "f528e872-8fd6-4cfc-b4d5-129433fed8f6"). InnerVolumeSpecName "kube-api-access-krrpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.437380 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krrpn\" (UniqueName: \"kubernetes.io/projected/f528e872-8fd6-4cfc-b4d5-129433fed8f6-kube-api-access-krrpn\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.437512 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.470334 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f528e872-8fd6-4cfc-b4d5-129433fed8f6" (UID: "f528e872-8fd6-4cfc-b4d5-129433fed8f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.538113 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f528e872-8fd6-4cfc-b4d5-129433fed8f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.947635 4811 generic.go:334] "Generic (PLEG): container finished" podID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" containerID="1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784" exitCode=0 Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.947749 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zq68z" Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.947696 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zq68z" event={"ID":"f528e872-8fd6-4cfc-b4d5-129433fed8f6","Type":"ContainerDied","Data":"1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784"} Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.947940 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zq68z" event={"ID":"f528e872-8fd6-4cfc-b4d5-129433fed8f6","Type":"ContainerDied","Data":"5a4295896315f97ba8415b3fbb34f50d37ed4447bf46c218b23c1ec68efae21e"} Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.947973 4811 scope.go:117] "RemoveContainer" containerID="1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784" Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.977256 4811 scope.go:117] "RemoveContainer" containerID="c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757" Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 10:14:33.992962 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zq68z"] Feb 19 10:14:33 crc kubenswrapper[4811]: I0219 
10:14:33.998670 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zq68z"] Feb 19 10:14:34 crc kubenswrapper[4811]: I0219 10:14:34.004777 4811 scope.go:117] "RemoveContainer" containerID="3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd" Feb 19 10:14:34 crc kubenswrapper[4811]: I0219 10:14:34.019200 4811 scope.go:117] "RemoveContainer" containerID="1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784" Feb 19 10:14:34 crc kubenswrapper[4811]: E0219 10:14:34.019612 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784\": container with ID starting with 1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784 not found: ID does not exist" containerID="1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784" Feb 19 10:14:34 crc kubenswrapper[4811]: I0219 10:14:34.019656 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784"} err="failed to get container status \"1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784\": rpc error: code = NotFound desc = could not find container \"1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784\": container with ID starting with 1dc338fcc9a40512662c9c8391a79b0376d704050253c58615d5515bb7dd4784 not found: ID does not exist" Feb 19 10:14:34 crc kubenswrapper[4811]: I0219 10:14:34.019686 4811 scope.go:117] "RemoveContainer" containerID="c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757" Feb 19 10:14:34 crc kubenswrapper[4811]: E0219 10:14:34.020125 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757\": container with ID 
starting with c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757 not found: ID does not exist" containerID="c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757" Feb 19 10:14:34 crc kubenswrapper[4811]: I0219 10:14:34.020157 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757"} err="failed to get container status \"c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757\": rpc error: code = NotFound desc = could not find container \"c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757\": container with ID starting with c68c3044f2ceced560321a63f7b0726636ba096ba96b30f4c67d3e531d465757 not found: ID does not exist" Feb 19 10:14:34 crc kubenswrapper[4811]: I0219 10:14:34.020183 4811 scope.go:117] "RemoveContainer" containerID="3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd" Feb 19 10:14:34 crc kubenswrapper[4811]: E0219 10:14:34.020464 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd\": container with ID starting with 3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd not found: ID does not exist" containerID="3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd" Feb 19 10:14:34 crc kubenswrapper[4811]: I0219 10:14:34.020497 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd"} err="failed to get container status \"3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd\": rpc error: code = NotFound desc = could not find container \"3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd\": container with ID starting with 3f9b91a97795eb2a16dc9b036061f7cd417a8a6870477d9703f04e46b76aa4cd not found: 
ID does not exist" Feb 19 10:14:34 crc kubenswrapper[4811]: I0219 10:14:34.589643 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" path="/var/lib/kubelet/pods/f528e872-8fd6-4cfc-b4d5-129433fed8f6/volumes" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.278813 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pctn2"] Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.279892 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pctn2" podUID="006d82c1-543d-45da-8e4b-75a557df7959" containerName="registry-server" containerID="cri-o://8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded" gracePeriod=30 Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.294886 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cpcsw"] Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.295312 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cpcsw" podUID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" containerName="registry-server" containerID="cri-o://df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d" gracePeriod=30 Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.304570 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mxhh7"] Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.304898 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" podUID="89c5cf4e-a301-48d2-8d31-056b6afc187b" containerName="marketplace-operator" containerID="cri-o://61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299" gracePeriod=30 Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.309830 4811 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-79ql7"] Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.310140 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-79ql7" podUID="bad2100b-8b05-432c-96b8-19dbc3c5471c" containerName="registry-server" containerID="cri-o://a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374" gracePeriod=30 Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.322691 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7lcg9"] Feb 19 10:14:44 crc kubenswrapper[4811]: E0219 10:14:44.323036 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" containerName="extract-content" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.323056 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" containerName="extract-content" Feb 19 10:14:44 crc kubenswrapper[4811]: E0219 10:14:44.323072 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" containerName="extract-content" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.323080 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" containerName="extract-content" Feb 19 10:14:44 crc kubenswrapper[4811]: E0219 10:14:44.323094 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" containerName="registry-server" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.323103 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" containerName="registry-server" Feb 19 10:14:44 crc kubenswrapper[4811]: E0219 10:14:44.323124 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" containerName="extract-utilities" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.323131 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" containerName="extract-utilities" Feb 19 10:14:44 crc kubenswrapper[4811]: E0219 10:14:44.323142 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" containerName="registry-server" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.323149 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" containerName="registry-server" Feb 19 10:14:44 crc kubenswrapper[4811]: E0219 10:14:44.323160 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" containerName="extract-utilities" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.323170 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" containerName="extract-utilities" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.323294 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f528e872-8fd6-4cfc-b4d5-129433fed8f6" containerName="registry-server" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.323311 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f0825a4-4e84-49d7-929e-165ced3ed2f8" containerName="registry-server" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.323853 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.345147 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5kfzw"] Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.345422 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5kfzw" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerName="registry-server" containerID="cri-o://09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3" gracePeriod=30 Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.351751 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7lcg9"] Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.494392 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5fd28c56-84ca-4b57-af19-9c11dc95b397-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7lcg9\" (UID: \"5fd28c56-84ca-4b57-af19-9c11dc95b397\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.494785 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5fd28c56-84ca-4b57-af19-9c11dc95b397-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7lcg9\" (UID: \"5fd28c56-84ca-4b57-af19-9c11dc95b397\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.494815 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgxg9\" (UniqueName: 
\"kubernetes.io/projected/5fd28c56-84ca-4b57-af19-9c11dc95b397-kube-api-access-jgxg9\") pod \"marketplace-operator-79b997595-7lcg9\" (UID: \"5fd28c56-84ca-4b57-af19-9c11dc95b397\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.595934 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5fd28c56-84ca-4b57-af19-9c11dc95b397-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7lcg9\" (UID: \"5fd28c56-84ca-4b57-af19-9c11dc95b397\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.596030 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5fd28c56-84ca-4b57-af19-9c11dc95b397-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7lcg9\" (UID: \"5fd28c56-84ca-4b57-af19-9c11dc95b397\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.596061 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgxg9\" (UniqueName: \"kubernetes.io/projected/5fd28c56-84ca-4b57-af19-9c11dc95b397-kube-api-access-jgxg9\") pod \"marketplace-operator-79b997595-7lcg9\" (UID: \"5fd28c56-84ca-4b57-af19-9c11dc95b397\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.598747 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5fd28c56-84ca-4b57-af19-9c11dc95b397-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7lcg9\" (UID: \"5fd28c56-84ca-4b57-af19-9c11dc95b397\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:44 crc 
kubenswrapper[4811]: I0219 10:14:44.602709 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5fd28c56-84ca-4b57-af19-9c11dc95b397-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7lcg9\" (UID: \"5fd28c56-84ca-4b57-af19-9c11dc95b397\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.616013 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgxg9\" (UniqueName: \"kubernetes.io/projected/5fd28c56-84ca-4b57-af19-9c11dc95b397-kube-api-access-jgxg9\") pod \"marketplace-operator-79b997595-7lcg9\" (UID: \"5fd28c56-84ca-4b57-af19-9c11dc95b397\") " pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.649560 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.760767 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.834728 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.838663 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.845553 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.865757 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906592 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-utilities\") pod \"006d82c1-543d-45da-8e4b-75a557df7959\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906626 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-catalog-content\") pod \"006d82c1-543d-45da-8e4b-75a557df7959\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906656 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmb2k\" (UniqueName: \"kubernetes.io/projected/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-kube-api-access-vmb2k\") pod \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\" (UID: \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906680 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-catalog-content\") pod \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\" (UID: \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906730 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-operator-metrics\") pod \"89c5cf4e-a301-48d2-8d31-056b6afc187b\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906757 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-utilities\") pod \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906779 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-utilities\") pod \"bad2100b-8b05-432c-96b8-19dbc3c5471c\" (UID: \"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906802 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-trusted-ca\") pod \"89c5cf4e-a301-48d2-8d31-056b6afc187b\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906866 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wf6f\" (UniqueName: \"kubernetes.io/projected/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-kube-api-access-7wf6f\") pod \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906895 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-utilities\") pod \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\" (UID: \"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906926 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79jzz\" (UniqueName: \"kubernetes.io/projected/bad2100b-8b05-432c-96b8-19dbc3c5471c-kube-api-access-79jzz\") pod \"bad2100b-8b05-432c-96b8-19dbc3c5471c\" (UID: 
\"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906969 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-catalog-content\") pod \"bad2100b-8b05-432c-96b8-19dbc3c5471c\" (UID: \"bad2100b-8b05-432c-96b8-19dbc3c5471c\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.906989 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tldzc\" (UniqueName: \"kubernetes.io/projected/89c5cf4e-a301-48d2-8d31-056b6afc187b-kube-api-access-tldzc\") pod \"89c5cf4e-a301-48d2-8d31-056b6afc187b\" (UID: \"89c5cf4e-a301-48d2-8d31-056b6afc187b\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.907018 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-catalog-content\") pod \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\" (UID: \"c2f288dc-ac27-4a76-84da-d3f4e2dccf77\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.907042 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9slxr\" (UniqueName: \"kubernetes.io/projected/006d82c1-543d-45da-8e4b-75a557df7959-kube-api-access-9slxr\") pod \"006d82c1-543d-45da-8e4b-75a557df7959\" (UID: \"006d82c1-543d-45da-8e4b-75a557df7959\") " Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.908370 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-utilities" (OuterVolumeSpecName: "utilities") pod "bad2100b-8b05-432c-96b8-19dbc3c5471c" (UID: "bad2100b-8b05-432c-96b8-19dbc3c5471c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.909393 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-utilities" (OuterVolumeSpecName: "utilities") pod "006d82c1-543d-45da-8e4b-75a557df7959" (UID: "006d82c1-543d-45da-8e4b-75a557df7959"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.910158 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "89c5cf4e-a301-48d2-8d31-056b6afc187b" (UID: "89c5cf4e-a301-48d2-8d31-056b6afc187b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.911159 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-utilities" (OuterVolumeSpecName: "utilities") pod "fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" (UID: "fc1f2b4f-13cd-486c-a09a-14c17b7e2c86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.912258 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-utilities" (OuterVolumeSpecName: "utilities") pod "c2f288dc-ac27-4a76-84da-d3f4e2dccf77" (UID: "c2f288dc-ac27-4a76-84da-d3f4e2dccf77"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.913831 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-kube-api-access-vmb2k" (OuterVolumeSpecName: "kube-api-access-vmb2k") pod "fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" (UID: "fc1f2b4f-13cd-486c-a09a-14c17b7e2c86"). InnerVolumeSpecName "kube-api-access-vmb2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.919054 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad2100b-8b05-432c-96b8-19dbc3c5471c-kube-api-access-79jzz" (OuterVolumeSpecName: "kube-api-access-79jzz") pod "bad2100b-8b05-432c-96b8-19dbc3c5471c" (UID: "bad2100b-8b05-432c-96b8-19dbc3c5471c"). InnerVolumeSpecName "kube-api-access-79jzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.919334 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-kube-api-access-7wf6f" (OuterVolumeSpecName: "kube-api-access-7wf6f") pod "c2f288dc-ac27-4a76-84da-d3f4e2dccf77" (UID: "c2f288dc-ac27-4a76-84da-d3f4e2dccf77"). InnerVolumeSpecName "kube-api-access-7wf6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.919650 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c5cf4e-a301-48d2-8d31-056b6afc187b-kube-api-access-tldzc" (OuterVolumeSpecName: "kube-api-access-tldzc") pod "89c5cf4e-a301-48d2-8d31-056b6afc187b" (UID: "89c5cf4e-a301-48d2-8d31-056b6afc187b"). InnerVolumeSpecName "kube-api-access-tldzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.919852 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "89c5cf4e-a301-48d2-8d31-056b6afc187b" (UID: "89c5cf4e-a301-48d2-8d31-056b6afc187b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.930745 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006d82c1-543d-45da-8e4b-75a557df7959-kube-api-access-9slxr" (OuterVolumeSpecName: "kube-api-access-9slxr") pod "006d82c1-543d-45da-8e4b-75a557df7959" (UID: "006d82c1-543d-45da-8e4b-75a557df7959"). InnerVolumeSpecName "kube-api-access-9slxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.943667 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bad2100b-8b05-432c-96b8-19dbc3c5471c" (UID: "bad2100b-8b05-432c-96b8-19dbc3c5471c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:44 crc kubenswrapper[4811]: I0219 10:14:44.975079 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "006d82c1-543d-45da-8e4b-75a557df7959" (UID: "006d82c1-543d-45da-8e4b-75a557df7959"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.003239 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2f288dc-ac27-4a76-84da-d3f4e2dccf77" (UID: "c2f288dc-ac27-4a76-84da-d3f4e2dccf77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008529 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wf6f\" (UniqueName: \"kubernetes.io/projected/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-kube-api-access-7wf6f\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008610 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008625 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79jzz\" (UniqueName: \"kubernetes.io/projected/bad2100b-8b05-432c-96b8-19dbc3c5471c-kube-api-access-79jzz\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008637 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008649 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tldzc\" (UniqueName: \"kubernetes.io/projected/89c5cf4e-a301-48d2-8d31-056b6afc187b-kube-api-access-tldzc\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008661 4811 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008682 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9slxr\" (UniqueName: \"kubernetes.io/projected/006d82c1-543d-45da-8e4b-75a557df7959-kube-api-access-9slxr\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008697 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008708 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/006d82c1-543d-45da-8e4b-75a557df7959-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008720 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmb2k\" (UniqueName: \"kubernetes.io/projected/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-kube-api-access-vmb2k\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008734 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008746 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f288dc-ac27-4a76-84da-d3f4e2dccf77-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008754 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bad2100b-8b05-432c-96b8-19dbc3c5471c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.008767 4811 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89c5cf4e-a301-48d2-8d31-056b6afc187b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.020654 4811 generic.go:334] "Generic (PLEG): container finished" podID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerID="09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3" exitCode=0 Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.020756 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfzw" event={"ID":"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86","Type":"ContainerDied","Data":"09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3"} Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.020822 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfzw" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.020848 4811 scope.go:117] "RemoveContainer" containerID="09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.020833 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfzw" event={"ID":"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86","Type":"ContainerDied","Data":"74cb6819d01e888fed097ffe09a93b01a2f5255f0c3aa6fc47a990079771b243"} Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.023792 4811 generic.go:334] "Generic (PLEG): container finished" podID="006d82c1-543d-45da-8e4b-75a557df7959" containerID="8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded" exitCode=0 Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.023875 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pctn2" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.023885 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pctn2" event={"ID":"006d82c1-543d-45da-8e4b-75a557df7959","Type":"ContainerDied","Data":"8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded"} Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.023926 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pctn2" event={"ID":"006d82c1-543d-45da-8e4b-75a557df7959","Type":"ContainerDied","Data":"6cac65382904eaaa4b2f739b36cd51fc1881c69291b0ef5a9d2102b775ea87ac"} Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.026357 4811 generic.go:334] "Generic (PLEG): container finished" podID="89c5cf4e-a301-48d2-8d31-056b6afc187b" containerID="61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299" exitCode=0 Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.026404 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" event={"ID":"89c5cf4e-a301-48d2-8d31-056b6afc187b","Type":"ContainerDied","Data":"61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299"} Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.026424 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" event={"ID":"89c5cf4e-a301-48d2-8d31-056b6afc187b","Type":"ContainerDied","Data":"1420797e62ca4bdef3798fa3ce5a39a66c724b426135e0aa1b3d100cd4fccf26"} Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.026784 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mxhh7" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.030582 4811 generic.go:334] "Generic (PLEG): container finished" podID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" containerID="df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d" exitCode=0 Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.030658 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpcsw" event={"ID":"c2f288dc-ac27-4a76-84da-d3f4e2dccf77","Type":"ContainerDied","Data":"df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d"} Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.030690 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpcsw" event={"ID":"c2f288dc-ac27-4a76-84da-d3f4e2dccf77","Type":"ContainerDied","Data":"086e0a634459f0571db3040f003db99438a3ddd37f6937a139da5c160f3a14ed"} Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.030906 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cpcsw" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.034666 4811 generic.go:334] "Generic (PLEG): container finished" podID="bad2100b-8b05-432c-96b8-19dbc3c5471c" containerID="a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374" exitCode=0 Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.034713 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79ql7" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.034721 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79ql7" event={"ID":"bad2100b-8b05-432c-96b8-19dbc3c5471c","Type":"ContainerDied","Data":"a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374"} Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.034759 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79ql7" event={"ID":"bad2100b-8b05-432c-96b8-19dbc3c5471c","Type":"ContainerDied","Data":"15c632ccddd5c86c8b2274253a5eb66c0f76250c95da588c3f3763fed6ee99dc"} Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.051673 4811 scope.go:117] "RemoveContainer" containerID="46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.062252 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pctn2"] Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.065526 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pctn2"] Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.080786 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" (UID: "fc1f2b4f-13cd-486c-a09a-14c17b7e2c86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.086632 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-79ql7"] Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.088356 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-79ql7"] Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.093437 4811 scope.go:117] "RemoveContainer" containerID="ff689b1038b3f3f15850c484c9e370ac4cbb811c54b261b0d7c2b13cefc35914" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.096720 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mxhh7"] Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.101287 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mxhh7"] Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.110779 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.112262 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cpcsw"] Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.114818 4811 scope.go:117] "RemoveContainer" containerID="09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.115271 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cpcsw"] Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.115437 4811 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3\": container with ID starting with 09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3 not found: ID does not exist" containerID="09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.115505 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3"} err="failed to get container status \"09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3\": rpc error: code = NotFound desc = could not find container \"09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3\": container with ID starting with 09b9d7e23b5d5875e0568c9387b3c34b76415f3d3dd67523d3e1645f4b2595a3 not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.115544 4811 scope.go:117] "RemoveContainer" containerID="46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54" Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.116089 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54\": container with ID starting with 46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54 not found: ID does not exist" containerID="46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.116125 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54"} err="failed to get container status \"46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54\": rpc error: code = NotFound desc = could not find container 
\"46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54\": container with ID starting with 46ef71d764192f2f5463681393644b3aa23acd152468ef438f849a8eba671d54 not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.116158 4811 scope.go:117] "RemoveContainer" containerID="ff689b1038b3f3f15850c484c9e370ac4cbb811c54b261b0d7c2b13cefc35914" Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.116706 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff689b1038b3f3f15850c484c9e370ac4cbb811c54b261b0d7c2b13cefc35914\": container with ID starting with ff689b1038b3f3f15850c484c9e370ac4cbb811c54b261b0d7c2b13cefc35914 not found: ID does not exist" containerID="ff689b1038b3f3f15850c484c9e370ac4cbb811c54b261b0d7c2b13cefc35914" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.116743 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff689b1038b3f3f15850c484c9e370ac4cbb811c54b261b0d7c2b13cefc35914"} err="failed to get container status \"ff689b1038b3f3f15850c484c9e370ac4cbb811c54b261b0d7c2b13cefc35914\": rpc error: code = NotFound desc = could not find container \"ff689b1038b3f3f15850c484c9e370ac4cbb811c54b261b0d7c2b13cefc35914\": container with ID starting with ff689b1038b3f3f15850c484c9e370ac4cbb811c54b261b0d7c2b13cefc35914 not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.116776 4811 scope.go:117] "RemoveContainer" containerID="8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.133279 4811 scope.go:117] "RemoveContainer" containerID="c667cf10ca24075d80e90f6b7f7a8a98915541a064ec0591822211c7de6565f6" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.152268 4811 scope.go:117] "RemoveContainer" containerID="f7477d924d9b5c7ffc2de6a0d4b96875b781cf5ceea61ded5c9fdcb309c31985" Feb 19 10:14:45 crc 
kubenswrapper[4811]: I0219 10:14:45.202100 4811 scope.go:117] "RemoveContainer" containerID="8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded" Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.202624 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded\": container with ID starting with 8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded not found: ID does not exist" containerID="8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.202662 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded"} err="failed to get container status \"8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded\": rpc error: code = NotFound desc = could not find container \"8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded\": container with ID starting with 8d7454b18cc4d065ea13d4c7c90ea3d70cc3dbb16194c8948a66fb376e4aeded not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.202692 4811 scope.go:117] "RemoveContainer" containerID="c667cf10ca24075d80e90f6b7f7a8a98915541a064ec0591822211c7de6565f6" Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.206921 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c667cf10ca24075d80e90f6b7f7a8a98915541a064ec0591822211c7de6565f6\": container with ID starting with c667cf10ca24075d80e90f6b7f7a8a98915541a064ec0591822211c7de6565f6 not found: ID does not exist" containerID="c667cf10ca24075d80e90f6b7f7a8a98915541a064ec0591822211c7de6565f6" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.206973 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c667cf10ca24075d80e90f6b7f7a8a98915541a064ec0591822211c7de6565f6"} err="failed to get container status \"c667cf10ca24075d80e90f6b7f7a8a98915541a064ec0591822211c7de6565f6\": rpc error: code = NotFound desc = could not find container \"c667cf10ca24075d80e90f6b7f7a8a98915541a064ec0591822211c7de6565f6\": container with ID starting with c667cf10ca24075d80e90f6b7f7a8a98915541a064ec0591822211c7de6565f6 not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.207012 4811 scope.go:117] "RemoveContainer" containerID="f7477d924d9b5c7ffc2de6a0d4b96875b781cf5ceea61ded5c9fdcb309c31985" Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.209368 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7477d924d9b5c7ffc2de6a0d4b96875b781cf5ceea61ded5c9fdcb309c31985\": container with ID starting with f7477d924d9b5c7ffc2de6a0d4b96875b781cf5ceea61ded5c9fdcb309c31985 not found: ID does not exist" containerID="f7477d924d9b5c7ffc2de6a0d4b96875b781cf5ceea61ded5c9fdcb309c31985" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.209465 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7477d924d9b5c7ffc2de6a0d4b96875b781cf5ceea61ded5c9fdcb309c31985"} err="failed to get container status \"f7477d924d9b5c7ffc2de6a0d4b96875b781cf5ceea61ded5c9fdcb309c31985\": rpc error: code = NotFound desc = could not find container \"f7477d924d9b5c7ffc2de6a0d4b96875b781cf5ceea61ded5c9fdcb309c31985\": container with ID starting with f7477d924d9b5c7ffc2de6a0d4b96875b781cf5ceea61ded5c9fdcb309c31985 not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.209519 4811 scope.go:117] "RemoveContainer" containerID="61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.232804 4811 scope.go:117] "RemoveContainer" 
containerID="61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299" Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.233661 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299\": container with ID starting with 61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299 not found: ID does not exist" containerID="61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.233762 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299"} err="failed to get container status \"61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299\": rpc error: code = NotFound desc = could not find container \"61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299\": container with ID starting with 61bf5e3d5391b823a8e46dab50cd6af4b2b0de4ea3dd95bcaf4943bf0b0cd299 not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.233853 4811 scope.go:117] "RemoveContainer" containerID="df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.250792 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7lcg9"] Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.258367 4811 scope.go:117] "RemoveContainer" containerID="65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.281748 4811 scope.go:117] "RemoveContainer" containerID="75dcf83f5a9e9158ccf7501167a9e07bdc17a950bdd08eaba7e6e88b5d1ae511" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.303230 4811 scope.go:117] "RemoveContainer" 
containerID="df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d" Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.304019 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d\": container with ID starting with df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d not found: ID does not exist" containerID="df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.304154 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d"} err="failed to get container status \"df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d\": rpc error: code = NotFound desc = could not find container \"df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d\": container with ID starting with df56ee74ba6c076976dd2fa78f864f9e2967e63a9160305688fb4d94f0a7b43d not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.304283 4811 scope.go:117] "RemoveContainer" containerID="65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb" Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.305015 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb\": container with ID starting with 65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb not found: ID does not exist" containerID="65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.305085 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb"} err="failed to get container status \"65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb\": rpc error: code = NotFound desc = could not find container \"65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb\": container with ID starting with 65dec841c08ba888fe305c3a8d7a9464fc30a62b702aca301f6fa65b7d5c9eeb not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.305130 4811 scope.go:117] "RemoveContainer" containerID="75dcf83f5a9e9158ccf7501167a9e07bdc17a950bdd08eaba7e6e88b5d1ae511" Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.305525 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75dcf83f5a9e9158ccf7501167a9e07bdc17a950bdd08eaba7e6e88b5d1ae511\": container with ID starting with 75dcf83f5a9e9158ccf7501167a9e07bdc17a950bdd08eaba7e6e88b5d1ae511 not found: ID does not exist" containerID="75dcf83f5a9e9158ccf7501167a9e07bdc17a950bdd08eaba7e6e88b5d1ae511" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.305658 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75dcf83f5a9e9158ccf7501167a9e07bdc17a950bdd08eaba7e6e88b5d1ae511"} err="failed to get container status \"75dcf83f5a9e9158ccf7501167a9e07bdc17a950bdd08eaba7e6e88b5d1ae511\": rpc error: code = NotFound desc = could not find container \"75dcf83f5a9e9158ccf7501167a9e07bdc17a950bdd08eaba7e6e88b5d1ae511\": container with ID starting with 75dcf83f5a9e9158ccf7501167a9e07bdc17a950bdd08eaba7e6e88b5d1ae511 not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.305796 4811 scope.go:117] "RemoveContainer" containerID="a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.361521 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-5kfzw"] Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.365584 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5kfzw"] Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.365746 4811 scope.go:117] "RemoveContainer" containerID="f32c4d2722e2fb8420ad930793cc6d8c35d46b4ebc02c1f596085ef118349e58" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.389207 4811 scope.go:117] "RemoveContainer" containerID="cdb14a6f78dc08f35d0277d3021badf851969040178e46ee5f71687e8a7f8d41" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.409272 4811 scope.go:117] "RemoveContainer" containerID="a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374" Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.411393 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374\": container with ID starting with a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374 not found: ID does not exist" containerID="a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.411517 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374"} err="failed to get container status \"a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374\": rpc error: code = NotFound desc = could not find container \"a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374\": container with ID starting with a4c139e06cbdffabf77d30164993cffa9be2e0c99dad125ce7a9b8e919418374 not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.411592 4811 scope.go:117] "RemoveContainer" containerID="f32c4d2722e2fb8420ad930793cc6d8c35d46b4ebc02c1f596085ef118349e58" 
Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.412545 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f32c4d2722e2fb8420ad930793cc6d8c35d46b4ebc02c1f596085ef118349e58\": container with ID starting with f32c4d2722e2fb8420ad930793cc6d8c35d46b4ebc02c1f596085ef118349e58 not found: ID does not exist" containerID="f32c4d2722e2fb8420ad930793cc6d8c35d46b4ebc02c1f596085ef118349e58" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.412625 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32c4d2722e2fb8420ad930793cc6d8c35d46b4ebc02c1f596085ef118349e58"} err="failed to get container status \"f32c4d2722e2fb8420ad930793cc6d8c35d46b4ebc02c1f596085ef118349e58\": rpc error: code = NotFound desc = could not find container \"f32c4d2722e2fb8420ad930793cc6d8c35d46b4ebc02c1f596085ef118349e58\": container with ID starting with f32c4d2722e2fb8420ad930793cc6d8c35d46b4ebc02c1f596085ef118349e58 not found: ID does not exist" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.412649 4811 scope.go:117] "RemoveContainer" containerID="cdb14a6f78dc08f35d0277d3021badf851969040178e46ee5f71687e8a7f8d41" Feb 19 10:14:45 crc kubenswrapper[4811]: E0219 10:14:45.413037 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdb14a6f78dc08f35d0277d3021badf851969040178e46ee5f71687e8a7f8d41\": container with ID starting with cdb14a6f78dc08f35d0277d3021badf851969040178e46ee5f71687e8a7f8d41 not found: ID does not exist" containerID="cdb14a6f78dc08f35d0277d3021badf851969040178e46ee5f71687e8a7f8d41" Feb 19 10:14:45 crc kubenswrapper[4811]: I0219 10:14:45.413097 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb14a6f78dc08f35d0277d3021badf851969040178e46ee5f71687e8a7f8d41"} err="failed to get container status 
\"cdb14a6f78dc08f35d0277d3021badf851969040178e46ee5f71687e8a7f8d41\": rpc error: code = NotFound desc = could not find container \"cdb14a6f78dc08f35d0277d3021badf851969040178e46ee5f71687e8a7f8d41\": container with ID starting with cdb14a6f78dc08f35d0277d3021badf851969040178e46ee5f71687e8a7f8d41 not found: ID does not exist" Feb 19 10:14:46 crc kubenswrapper[4811]: I0219 10:14:46.061205 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" event={"ID":"5fd28c56-84ca-4b57-af19-9c11dc95b397","Type":"ContainerStarted","Data":"45cf30d8c62fdd8fc241d68f1fcbcc26872188a7f6a57b4347df0c3a3ba18f71"} Feb 19 10:14:46 crc kubenswrapper[4811]: I0219 10:14:46.061672 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" event={"ID":"5fd28c56-84ca-4b57-af19-9c11dc95b397","Type":"ContainerStarted","Data":"c62fdd68a505d186fc5fd030e9f96331d58e022208936390e7117b65b804600b"} Feb 19 10:14:46 crc kubenswrapper[4811]: I0219 10:14:46.062278 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:46 crc kubenswrapper[4811]: I0219 10:14:46.077814 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" Feb 19 10:14:46 crc kubenswrapper[4811]: I0219 10:14:46.089188 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7lcg9" podStartSLOduration=2.089166875 podStartE2EDuration="2.089166875s" podCreationTimestamp="2026-02-19 10:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:14:46.085000203 +0000 UTC m=+374.611464889" watchObservedRunningTime="2026-02-19 10:14:46.089166875 +0000 UTC m=+374.615631561" Feb 19 10:14:46 
crc kubenswrapper[4811]: I0219 10:14:46.590748 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006d82c1-543d-45da-8e4b-75a557df7959" path="/var/lib/kubelet/pods/006d82c1-543d-45da-8e4b-75a557df7959/volumes" Feb 19 10:14:46 crc kubenswrapper[4811]: I0219 10:14:46.591511 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c5cf4e-a301-48d2-8d31-056b6afc187b" path="/var/lib/kubelet/pods/89c5cf4e-a301-48d2-8d31-056b6afc187b/volumes" Feb 19 10:14:46 crc kubenswrapper[4811]: I0219 10:14:46.591989 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad2100b-8b05-432c-96b8-19dbc3c5471c" path="/var/lib/kubelet/pods/bad2100b-8b05-432c-96b8-19dbc3c5471c/volumes" Feb 19 10:14:46 crc kubenswrapper[4811]: I0219 10:14:46.592649 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" path="/var/lib/kubelet/pods/c2f288dc-ac27-4a76-84da-d3f4e2dccf77/volumes" Feb 19 10:14:46 crc kubenswrapper[4811]: I0219 10:14:46.593276 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" path="/var/lib/kubelet/pods/fc1f2b4f-13cd-486c-a09a-14c17b7e2c86/volumes" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050390 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-grnvj"] Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050636 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad2100b-8b05-432c-96b8-19dbc3c5471c" containerName="extract-content" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050652 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad2100b-8b05-432c-96b8-19dbc3c5471c" containerName="extract-content" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050666 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" 
containerName="extract-content" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050703 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" containerName="extract-content" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050717 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad2100b-8b05-432c-96b8-19dbc3c5471c" containerName="extract-utilities" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050725 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad2100b-8b05-432c-96b8-19dbc3c5471c" containerName="extract-utilities" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050735 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerName="extract-utilities" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050741 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerName="extract-utilities" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050751 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerName="extract-content" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050758 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerName="extract-content" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050770 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050777 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050786 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad2100b-8b05-432c-96b8-19dbc3c5471c" 
containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050792 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad2100b-8b05-432c-96b8-19dbc3c5471c" containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050804 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006d82c1-543d-45da-8e4b-75a557df7959" containerName="extract-content" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050811 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="006d82c1-543d-45da-8e4b-75a557df7959" containerName="extract-content" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050820 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006d82c1-543d-45da-8e4b-75a557df7959" containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050828 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="006d82c1-543d-45da-8e4b-75a557df7959" containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050838 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c5cf4e-a301-48d2-8d31-056b6afc187b" containerName="marketplace-operator" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050846 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c5cf4e-a301-48d2-8d31-056b6afc187b" containerName="marketplace-operator" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050857 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006d82c1-543d-45da-8e4b-75a557df7959" containerName="extract-utilities" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050865 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="006d82c1-543d-45da-8e4b-75a557df7959" containerName="extract-utilities" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050876 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" 
containerName="extract-utilities" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050883 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" containerName="extract-utilities" Feb 19 10:14:47 crc kubenswrapper[4811]: E0219 10:14:47.050903 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.050910 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.051016 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad2100b-8b05-432c-96b8-19dbc3c5471c" containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.051029 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2f288dc-ac27-4a76-84da-d3f4e2dccf77" containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.051040 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c5cf4e-a301-48d2-8d31-056b6afc187b" containerName="marketplace-operator" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.051051 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="006d82c1-543d-45da-8e4b-75a557df7959" containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.051063 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1f2b4f-13cd-486c-a09a-14c17b7e2c86" containerName="registry-server" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.051998 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.056052 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.060561 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-grnvj"] Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.239260 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8983e037-aaa4-4bc6-82a1-8f312808a88a-catalog-content\") pod \"community-operators-grnvj\" (UID: \"8983e037-aaa4-4bc6-82a1-8f312808a88a\") " pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.239327 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlmtf\" (UniqueName: \"kubernetes.io/projected/8983e037-aaa4-4bc6-82a1-8f312808a88a-kube-api-access-qlmtf\") pod \"community-operators-grnvj\" (UID: \"8983e037-aaa4-4bc6-82a1-8f312808a88a\") " pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.239411 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8983e037-aaa4-4bc6-82a1-8f312808a88a-utilities\") pod \"community-operators-grnvj\" (UID: \"8983e037-aaa4-4bc6-82a1-8f312808a88a\") " pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.340489 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8983e037-aaa4-4bc6-82a1-8f312808a88a-catalog-content\") pod \"community-operators-grnvj\" (UID: 
\"8983e037-aaa4-4bc6-82a1-8f312808a88a\") " pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.340555 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlmtf\" (UniqueName: \"kubernetes.io/projected/8983e037-aaa4-4bc6-82a1-8f312808a88a-kube-api-access-qlmtf\") pod \"community-operators-grnvj\" (UID: \"8983e037-aaa4-4bc6-82a1-8f312808a88a\") " pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.340609 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8983e037-aaa4-4bc6-82a1-8f312808a88a-utilities\") pod \"community-operators-grnvj\" (UID: \"8983e037-aaa4-4bc6-82a1-8f312808a88a\") " pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.341123 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8983e037-aaa4-4bc6-82a1-8f312808a88a-utilities\") pod \"community-operators-grnvj\" (UID: \"8983e037-aaa4-4bc6-82a1-8f312808a88a\") " pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.341263 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8983e037-aaa4-4bc6-82a1-8f312808a88a-catalog-content\") pod \"community-operators-grnvj\" (UID: \"8983e037-aaa4-4bc6-82a1-8f312808a88a\") " pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.364914 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlmtf\" (UniqueName: \"kubernetes.io/projected/8983e037-aaa4-4bc6-82a1-8f312808a88a-kube-api-access-qlmtf\") pod \"community-operators-grnvj\" (UID: 
\"8983e037-aaa4-4bc6-82a1-8f312808a88a\") " pod="openshift-marketplace/community-operators-grnvj"
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.384820 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-grnvj"
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.655438 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h2c5f"]
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.656913 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h2c5f"
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.660593 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.662290 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h2c5f"]
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.841963 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-grnvj"]
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.846869 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-catalog-content\") pod \"certified-operators-h2c5f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " pod="openshift-marketplace/certified-operators-h2c5f"
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.846979 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzdz\" (UniqueName: \"kubernetes.io/projected/5045e58a-5dec-4419-823e-479ca95e924f-kube-api-access-8nzdz\") pod \"certified-operators-h2c5f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " pod="openshift-marketplace/certified-operators-h2c5f"
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.847040 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-utilities\") pod \"certified-operators-h2c5f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " pod="openshift-marketplace/certified-operators-h2c5f"
Feb 19 10:14:47 crc kubenswrapper[4811]: W0219 10:14:47.850665 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8983e037_aaa4_4bc6_82a1_8f312808a88a.slice/crio-3fa5f93b2b040c4a55b513f3a4282ad123c8e1e63c5eb87a305bc2109c196752 WatchSource:0}: Error finding container 3fa5f93b2b040c4a55b513f3a4282ad123c8e1e63c5eb87a305bc2109c196752: Status 404 returned error can't find the container with id 3fa5f93b2b040c4a55b513f3a4282ad123c8e1e63c5eb87a305bc2109c196752
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.948791 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nzdz\" (UniqueName: \"kubernetes.io/projected/5045e58a-5dec-4419-823e-479ca95e924f-kube-api-access-8nzdz\") pod \"certified-operators-h2c5f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " pod="openshift-marketplace/certified-operators-h2c5f"
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.948888 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-utilities\") pod \"certified-operators-h2c5f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " pod="openshift-marketplace/certified-operators-h2c5f"
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.948926 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-catalog-content\") pod \"certified-operators-h2c5f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " pod="openshift-marketplace/certified-operators-h2c5f"
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.949513 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-catalog-content\") pod \"certified-operators-h2c5f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " pod="openshift-marketplace/certified-operators-h2c5f"
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.949847 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-utilities\") pod \"certified-operators-h2c5f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " pod="openshift-marketplace/certified-operators-h2c5f"
Feb 19 10:14:47 crc kubenswrapper[4811]: I0219 10:14:47.979584 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nzdz\" (UniqueName: \"kubernetes.io/projected/5045e58a-5dec-4419-823e-479ca95e924f-kube-api-access-8nzdz\") pod \"certified-operators-h2c5f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " pod="openshift-marketplace/certified-operators-h2c5f"
Feb 19 10:14:48 crc kubenswrapper[4811]: I0219 10:14:48.083841 4811 generic.go:334] "Generic (PLEG): container finished" podID="8983e037-aaa4-4bc6-82a1-8f312808a88a" containerID="bea52145765ae5c6e3562ad03ecd134f37c220e986298f78d135c991607a11ec" exitCode=0
Feb 19 10:14:48 crc kubenswrapper[4811]: I0219 10:14:48.084104 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grnvj" event={"ID":"8983e037-aaa4-4bc6-82a1-8f312808a88a","Type":"ContainerDied","Data":"bea52145765ae5c6e3562ad03ecd134f37c220e986298f78d135c991607a11ec"}
Feb 19 10:14:48 crc kubenswrapper[4811]: I0219 10:14:48.084197 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grnvj" event={"ID":"8983e037-aaa4-4bc6-82a1-8f312808a88a","Type":"ContainerStarted","Data":"3fa5f93b2b040c4a55b513f3a4282ad123c8e1e63c5eb87a305bc2109c196752"}
Feb 19 10:14:48 crc kubenswrapper[4811]: I0219 10:14:48.278019 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h2c5f"
Feb 19 10:14:48 crc kubenswrapper[4811]: I0219 10:14:48.765527 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h2c5f"]
Feb 19 10:14:48 crc kubenswrapper[4811]: W0219 10:14:48.770930 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5045e58a_5dec_4419_823e_479ca95e924f.slice/crio-b4a76a93ed744849719a33a73405e259ca614990a66c7f03d256e56743b4c33f WatchSource:0}: Error finding container b4a76a93ed744849719a33a73405e259ca614990a66c7f03d256e56743b4c33f: Status 404 returned error can't find the container with id b4a76a93ed744849719a33a73405e259ca614990a66c7f03d256e56743b4c33f
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.090555 4811 generic.go:334] "Generic (PLEG): container finished" podID="5045e58a-5dec-4419-823e-479ca95e924f" containerID="f7f77e6130c3ce0b95ceb069fec52d9a63c1190f0ed5e64e1a0750e3b8a11eb1" exitCode=0
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.090631 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2c5f" event={"ID":"5045e58a-5dec-4419-823e-479ca95e924f","Type":"ContainerDied","Data":"f7f77e6130c3ce0b95ceb069fec52d9a63c1190f0ed5e64e1a0750e3b8a11eb1"}
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.090668 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2c5f" event={"ID":"5045e58a-5dec-4419-823e-479ca95e924f","Type":"ContainerStarted","Data":"b4a76a93ed744849719a33a73405e259ca614990a66c7f03d256e56743b4c33f"}
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.092470 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grnvj" event={"ID":"8983e037-aaa4-4bc6-82a1-8f312808a88a","Type":"ContainerStarted","Data":"a378f2b1613abc8dd482b7fbffdfce22b2b016270496751a1e630f8071f76cc0"}
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.463971 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-82lbs"]
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.465761 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82lbs"
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.469982 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.483565 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82lbs"]
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.573359 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp66v\" (UniqueName: \"kubernetes.io/projected/587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e-kube-api-access-xp66v\") pod \"redhat-marketplace-82lbs\" (UID: \"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e\") " pod="openshift-marketplace/redhat-marketplace-82lbs"
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.573552 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e-catalog-content\") pod \"redhat-marketplace-82lbs\" (UID: \"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e\") " pod="openshift-marketplace/redhat-marketplace-82lbs"
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.573625 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e-utilities\") pod \"redhat-marketplace-82lbs\" (UID: \"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e\") " pod="openshift-marketplace/redhat-marketplace-82lbs"
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.675467 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e-utilities\") pod \"redhat-marketplace-82lbs\" (UID: \"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e\") " pod="openshift-marketplace/redhat-marketplace-82lbs"
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.675886 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp66v\" (UniqueName: \"kubernetes.io/projected/587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e-kube-api-access-xp66v\") pod \"redhat-marketplace-82lbs\" (UID: \"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e\") " pod="openshift-marketplace/redhat-marketplace-82lbs"
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.676014 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e-catalog-content\") pod \"redhat-marketplace-82lbs\" (UID: \"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e\") " pod="openshift-marketplace/redhat-marketplace-82lbs"
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.676026 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e-utilities\") pod \"redhat-marketplace-82lbs\" (UID: \"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e\") " pod="openshift-marketplace/redhat-marketplace-82lbs"
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.676229 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e-catalog-content\") pod \"redhat-marketplace-82lbs\" (UID: \"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e\") " pod="openshift-marketplace/redhat-marketplace-82lbs"
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.701940 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp66v\" (UniqueName: \"kubernetes.io/projected/587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e-kube-api-access-xp66v\") pod \"redhat-marketplace-82lbs\" (UID: \"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e\") " pod="openshift-marketplace/redhat-marketplace-82lbs"
Feb 19 10:14:49 crc kubenswrapper[4811]: I0219 10:14:49.919413 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82lbs"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.061110 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7r9tk"]
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.068574 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7r9tk"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.072746 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7r9tk"]
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.097804 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.107575 4811 generic.go:334] "Generic (PLEG): container finished" podID="8983e037-aaa4-4bc6-82a1-8f312808a88a" containerID="a378f2b1613abc8dd482b7fbffdfce22b2b016270496751a1e630f8071f76cc0" exitCode=0
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.107635 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grnvj" event={"ID":"8983e037-aaa4-4bc6-82a1-8f312808a88a","Type":"ContainerDied","Data":"a378f2b1613abc8dd482b7fbffdfce22b2b016270496751a1e630f8071f76cc0"}
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.183628 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ea7e423-8a9e-4b21-a701-0682236bec67-utilities\") pod \"redhat-operators-7r9tk\" (UID: \"0ea7e423-8a9e-4b21-a701-0682236bec67\") " pod="openshift-marketplace/redhat-operators-7r9tk"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.184009 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ea7e423-8a9e-4b21-a701-0682236bec67-catalog-content\") pod \"redhat-operators-7r9tk\" (UID: \"0ea7e423-8a9e-4b21-a701-0682236bec67\") " pod="openshift-marketplace/redhat-operators-7r9tk"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.184055 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bx86\" (UniqueName: \"kubernetes.io/projected/0ea7e423-8a9e-4b21-a701-0682236bec67-kube-api-access-8bx86\") pod \"redhat-operators-7r9tk\" (UID: \"0ea7e423-8a9e-4b21-a701-0682236bec67\") " pod="openshift-marketplace/redhat-operators-7r9tk"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.264505 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82lbs"]
Feb 19 10:14:50 crc kubenswrapper[4811]: W0219 10:14:50.276348 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod587cc9ad_4897_4e3e_b6ee_fdbc9ee4532e.slice/crio-7f3c859952644be571bcbd3f5e35cea7a7e2cc2fbec27d0367394822ca8bccf6 WatchSource:0}: Error finding container 7f3c859952644be571bcbd3f5e35cea7a7e2cc2fbec27d0367394822ca8bccf6: Status 404 returned error can't find the container with id 7f3c859952644be571bcbd3f5e35cea7a7e2cc2fbec27d0367394822ca8bccf6
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.285259 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ea7e423-8a9e-4b21-a701-0682236bec67-catalog-content\") pod \"redhat-operators-7r9tk\" (UID: \"0ea7e423-8a9e-4b21-a701-0682236bec67\") " pod="openshift-marketplace/redhat-operators-7r9tk"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.285311 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bx86\" (UniqueName: \"kubernetes.io/projected/0ea7e423-8a9e-4b21-a701-0682236bec67-kube-api-access-8bx86\") pod \"redhat-operators-7r9tk\" (UID: \"0ea7e423-8a9e-4b21-a701-0682236bec67\") " pod="openshift-marketplace/redhat-operators-7r9tk"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.285382 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ea7e423-8a9e-4b21-a701-0682236bec67-utilities\") pod \"redhat-operators-7r9tk\" (UID: \"0ea7e423-8a9e-4b21-a701-0682236bec67\") " pod="openshift-marketplace/redhat-operators-7r9tk"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.285815 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ea7e423-8a9e-4b21-a701-0682236bec67-catalog-content\") pod \"redhat-operators-7r9tk\" (UID: \"0ea7e423-8a9e-4b21-a701-0682236bec67\") " pod="openshift-marketplace/redhat-operators-7r9tk"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.285859 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ea7e423-8a9e-4b21-a701-0682236bec67-utilities\") pod \"redhat-operators-7r9tk\" (UID: \"0ea7e423-8a9e-4b21-a701-0682236bec67\") " pod="openshift-marketplace/redhat-operators-7r9tk"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.305482 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bx86\" (UniqueName: \"kubernetes.io/projected/0ea7e423-8a9e-4b21-a701-0682236bec67-kube-api-access-8bx86\") pod \"redhat-operators-7r9tk\" (UID: \"0ea7e423-8a9e-4b21-a701-0682236bec67\") " pod="openshift-marketplace/redhat-operators-7r9tk"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.451250 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7r9tk"
Feb 19 10:14:50 crc kubenswrapper[4811]: I0219 10:14:50.894177 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7r9tk"]
Feb 19 10:14:50 crc kubenswrapper[4811]: W0219 10:14:50.903275 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ea7e423_8a9e_4b21_a701_0682236bec67.slice/crio-c7f55edb911bbcdeb229ca17121cfead1bf822e554f5424b6e9ec5ce8d7c9a52 WatchSource:0}: Error finding container c7f55edb911bbcdeb229ca17121cfead1bf822e554f5424b6e9ec5ce8d7c9a52: Status 404 returned error can't find the container with id c7f55edb911bbcdeb229ca17121cfead1bf822e554f5424b6e9ec5ce8d7c9a52
Feb 19 10:14:51 crc kubenswrapper[4811]: I0219 10:14:51.117660 4811 generic.go:334] "Generic (PLEG): container finished" podID="5045e58a-5dec-4419-823e-479ca95e924f" containerID="211a80ac549df8588f3b25f8416e4cf6797278b725b9b5d1eb870fda22f15248" exitCode=0
Feb 19 10:14:51 crc kubenswrapper[4811]: I0219 10:14:51.117758 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2c5f" event={"ID":"5045e58a-5dec-4419-823e-479ca95e924f","Type":"ContainerDied","Data":"211a80ac549df8588f3b25f8416e4cf6797278b725b9b5d1eb870fda22f15248"}
Feb 19 10:14:51 crc kubenswrapper[4811]: I0219 10:14:51.126693 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-grnvj" event={"ID":"8983e037-aaa4-4bc6-82a1-8f312808a88a","Type":"ContainerStarted","Data":"f04a7f4daa6094d66ebc3306721117e96b145b9df33d0a75eab1e0e65b4b17e8"}
Feb 19 10:14:51 crc kubenswrapper[4811]: I0219 10:14:51.131574 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r9tk" event={"ID":"0ea7e423-8a9e-4b21-a701-0682236bec67","Type":"ContainerStarted","Data":"c7f55edb911bbcdeb229ca17121cfead1bf822e554f5424b6e9ec5ce8d7c9a52"}
Feb 19 10:14:51 crc kubenswrapper[4811]: I0219 10:14:51.135330 4811 generic.go:334] "Generic (PLEG): container finished" podID="587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e" containerID="8575c6526e414e096d5b5d1303d0a5c60fdf3b1a3656dd878b02070b1dda7d97" exitCode=0
Feb 19 10:14:51 crc kubenswrapper[4811]: I0219 10:14:51.135434 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82lbs" event={"ID":"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e","Type":"ContainerDied","Data":"8575c6526e414e096d5b5d1303d0a5c60fdf3b1a3656dd878b02070b1dda7d97"}
Feb 19 10:14:51 crc kubenswrapper[4811]: I0219 10:14:51.135504 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82lbs" event={"ID":"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e","Type":"ContainerStarted","Data":"7f3c859952644be571bcbd3f5e35cea7a7e2cc2fbec27d0367394822ca8bccf6"}
Feb 19 10:14:51 crc kubenswrapper[4811]: I0219 10:14:51.166163 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-grnvj" podStartSLOduration=1.6390527289999999 podStartE2EDuration="4.16612964s" podCreationTimestamp="2026-02-19 10:14:47 +0000 UTC" firstStartedPulling="2026-02-19 10:14:48.090503764 +0000 UTC m=+376.616968450" lastFinishedPulling="2026-02-19 10:14:50.617580675 +0000 UTC m=+379.144045361" observedRunningTime="2026-02-19 10:14:51.161901907 +0000 UTC m=+379.688366593" watchObservedRunningTime="2026-02-19 10:14:51.16612964 +0000 UTC m=+379.692594326"
Feb 19 10:14:52 crc kubenswrapper[4811]: I0219 10:14:52.145271 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2c5f" event={"ID":"5045e58a-5dec-4419-823e-479ca95e924f","Type":"ContainerStarted","Data":"640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703"}
Feb 19 10:14:52 crc kubenswrapper[4811]: I0219 10:14:52.147717 4811 generic.go:334] "Generic (PLEG): container finished" podID="0ea7e423-8a9e-4b21-a701-0682236bec67" containerID="93ef3d647fc9662074d939dc3bf7841a8414c2291ff7f637aa59c396cd5d7ad0" exitCode=0
Feb 19 10:14:52 crc kubenswrapper[4811]: I0219 10:14:52.147795 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r9tk" event={"ID":"0ea7e423-8a9e-4b21-a701-0682236bec67","Type":"ContainerDied","Data":"93ef3d647fc9662074d939dc3bf7841a8414c2291ff7f637aa59c396cd5d7ad0"}
Feb 19 10:14:52 crc kubenswrapper[4811]: I0219 10:14:52.192745 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h2c5f" podStartSLOduration=2.740262906 podStartE2EDuration="5.192718918s" podCreationTimestamp="2026-02-19 10:14:47 +0000 UTC" firstStartedPulling="2026-02-19 10:14:49.093826089 +0000 UTC m=+377.620290775" lastFinishedPulling="2026-02-19 10:14:51.546282091 +0000 UTC m=+380.072746787" observedRunningTime="2026-02-19 10:14:52.168181581 +0000 UTC m=+380.694646267" watchObservedRunningTime="2026-02-19 10:14:52.192718918 +0000 UTC m=+380.719183604"
Feb 19 10:14:53 crc kubenswrapper[4811]: I0219 10:14:53.158608 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r9tk" event={"ID":"0ea7e423-8a9e-4b21-a701-0682236bec67","Type":"ContainerStarted","Data":"8489187f98296b464ea6992d9c2846c10bd03a91d76a4d373a8cfc4593cd1d5d"}
Feb 19 10:14:53 crc kubenswrapper[4811]: I0219 10:14:53.162050 4811 generic.go:334] "Generic (PLEG): container finished" podID="587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e" containerID="3e3a8d278b06720a2b17bd357237297f07aefda366954f59fd4e0ac35eae8507" exitCode=0
Feb 19 10:14:53 crc kubenswrapper[4811]: I0219 10:14:53.162154 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82lbs" event={"ID":"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e","Type":"ContainerDied","Data":"3e3a8d278b06720a2b17bd357237297f07aefda366954f59fd4e0ac35eae8507"}
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.056826 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q49nd"]
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.058598 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.074332 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q49nd"]
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.173407 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r9tk" event={"ID":"0ea7e423-8a9e-4b21-a701-0682236bec67","Type":"ContainerDied","Data":"8489187f98296b464ea6992d9c2846c10bd03a91d76a4d373a8cfc4593cd1d5d"}
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.173345 4811 generic.go:334] "Generic (PLEG): container finished" podID="0ea7e423-8a9e-4b21-a701-0682236bec67" containerID="8489187f98296b464ea6992d9c2846c10bd03a91d76a4d373a8cfc4593cd1d5d" exitCode=0
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.180264 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82lbs" event={"ID":"587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e","Type":"ContainerStarted","Data":"c975e608ede8e0770999baa77946869cc3c81514ec6fa02e3fcb125bdc7cdd4d"}
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.223164 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-82lbs" podStartSLOduration=2.739171041 podStartE2EDuration="5.223140027s" podCreationTimestamp="2026-02-19 10:14:49 +0000 UTC" firstStartedPulling="2026-02-19 10:14:51.139002803 +0000 UTC m=+379.665467489" lastFinishedPulling="2026-02-19 10:14:53.622971789 +0000 UTC m=+382.149436475" observedRunningTime="2026-02-19 10:14:54.21989296 +0000 UTC m=+382.746357646" watchObservedRunningTime="2026-02-19 10:14:54.223140027 +0000 UTC m=+382.749604713"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.245613 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/894497af-c660-4451-bc20-69ddedd6eb87-trusted-ca\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.245702 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/894497af-c660-4451-bc20-69ddedd6eb87-bound-sa-token\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.245784 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/894497af-c660-4451-bc20-69ddedd6eb87-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.245849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.246048 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/894497af-c660-4451-bc20-69ddedd6eb87-registry-tls\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.246217 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f55q4\" (UniqueName: \"kubernetes.io/projected/894497af-c660-4451-bc20-69ddedd6eb87-kube-api-access-f55q4\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.246244 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/894497af-c660-4451-bc20-69ddedd6eb87-registry-certificates\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.246270 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/894497af-c660-4451-bc20-69ddedd6eb87-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.288057 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.347840 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/894497af-c660-4451-bc20-69ddedd6eb87-registry-tls\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.347963 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/894497af-c660-4451-bc20-69ddedd6eb87-registry-certificates\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.347980 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f55q4\" (UniqueName: \"kubernetes.io/projected/894497af-c660-4451-bc20-69ddedd6eb87-kube-api-access-f55q4\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.348865 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/894497af-c660-4451-bc20-69ddedd6eb87-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.349288 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/894497af-c660-4451-bc20-69ddedd6eb87-ca-trust-extracted\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.349372 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/894497af-c660-4451-bc20-69ddedd6eb87-trusted-ca\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.349741 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/894497af-c660-4451-bc20-69ddedd6eb87-registry-certificates\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.351047 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/894497af-c660-4451-bc20-69ddedd6eb87-trusted-ca\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.349407 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/894497af-c660-4451-bc20-69ddedd6eb87-bound-sa-token\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.351184 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/894497af-c660-4451-bc20-69ddedd6eb87-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.356124 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/894497af-c660-4451-bc20-69ddedd6eb87-installation-pull-secrets\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.356214 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/894497af-c660-4451-bc20-69ddedd6eb87-registry-tls\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.367950 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/894497af-c660-4451-bc20-69ddedd6eb87-bound-sa-token\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.369341 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f55q4\" (UniqueName: \"kubernetes.io/projected/894497af-c660-4451-bc20-69ddedd6eb87-kube-api-access-f55q4\") pod \"image-registry-66df7c8f76-q49nd\" (UID: \"894497af-c660-4451-bc20-69ddedd6eb87\") " pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.379008 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:54 crc kubenswrapper[4811]: I0219 10:14:54.636010 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-q49nd"]
Feb 19 10:14:55 crc kubenswrapper[4811]: I0219 10:14:55.187887 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-q49nd" event={"ID":"894497af-c660-4451-bc20-69ddedd6eb87","Type":"ContainerStarted","Data":"dd9e985867d98f052c50a079a94a014e58b1589808997855bd35ee25c4d18ce8"}
Feb 19 10:14:56 crc kubenswrapper[4811]: I0219 10:14:56.194914 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-q49nd" event={"ID":"894497af-c660-4451-bc20-69ddedd6eb87","Type":"ContainerStarted","Data":"97c5cfa12de8932d935adda6bafd55183c5bb89d4f94b5da799155c504b02d1a"}
Feb 19 10:14:56 crc kubenswrapper[4811]: I0219 10:14:56.195714 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-q49nd"
Feb 19 10:14:56 crc kubenswrapper[4811]: I0219 10:14:56.197368 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r9tk" event={"ID":"0ea7e423-8a9e-4b21-a701-0682236bec67","Type":"ContainerStarted","Data":"62797e25d08b0a8ba2dd9faab3bd8d73041f00cb5a28a725cc8ed8c203bee925"}
Feb 19 10:14:56 crc kubenswrapper[4811]: I0219 10:14:56.214213 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-q49nd" podStartSLOduration=2.214195939 podStartE2EDuration="2.214195939s" podCreationTimestamp="2026-02-19 10:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:14:56.212263748 +0000 UTC m=+384.738728444" watchObservedRunningTime="2026-02-19 10:14:56.214195939 +0000 UTC m=+384.740660615"
Feb 19 10:14:56 crc kubenswrapper[4811]: I0219 10:14:56.232811 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7r9tk" podStartSLOduration=3.85620095 podStartE2EDuration="6.232792598s" podCreationTimestamp="2026-02-19 10:14:50 +0000 UTC" firstStartedPulling="2026-02-19 10:14:52.22710117 +0000 UTC m=+380.753565856" lastFinishedPulling="2026-02-19 10:14:54.603692818 +0000 UTC m=+383.130157504" observedRunningTime="2026-02-19 10:14:56.231170625 +0000 UTC m=+384.757635331" watchObservedRunningTime="2026-02-19 10:14:56.232792598 +0000 UTC m=+384.759257284"
Feb 19 10:14:56 crc kubenswrapper[4811]: I0219 10:14:56.787895 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:14:56 crc kubenswrapper[4811]: I0219 10:14:56.787960 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:14:57 crc kubenswrapper[4811]: I0219 10:14:57.385777 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:57 crc kubenswrapper[4811]: I0219 10:14:57.386415 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:57 crc kubenswrapper[4811]: I0219 10:14:57.437613 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:58 crc kubenswrapper[4811]: I0219 10:14:58.252415 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-grnvj" Feb 19 10:14:58 crc kubenswrapper[4811]: I0219 10:14:58.279191 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h2c5f" Feb 19 10:14:58 crc kubenswrapper[4811]: I0219 10:14:58.279865 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h2c5f" Feb 19 10:14:58 crc kubenswrapper[4811]: I0219 10:14:58.317762 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h2c5f" Feb 19 10:14:59 crc kubenswrapper[4811]: I0219 10:14:59.255064 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h2c5f" Feb 19 10:14:59 crc kubenswrapper[4811]: I0219 10:14:59.919523 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-82lbs" Feb 19 10:14:59 crc kubenswrapper[4811]: I0219 10:14:59.920002 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-82lbs" Feb 19 10:14:59 crc kubenswrapper[4811]: I0219 10:14:59.974028 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-82lbs" Feb 19 10:15:00 crc kubenswrapper[4811]: 
I0219 10:15:00.175318 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd"] Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.176411 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.178461 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.179024 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.190814 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd"] Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.253902 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-82lbs" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.352913 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-secret-volume\") pod \"collect-profiles-29524935-q8dsd\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.353067 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-config-volume\") pod \"collect-profiles-29524935-q8dsd\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.353104 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6sk9\" (UniqueName: \"kubernetes.io/projected/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-kube-api-access-n6sk9\") pod \"collect-profiles-29524935-q8dsd\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.451551 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7r9tk" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.451663 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7r9tk" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.455210 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-config-volume\") pod \"collect-profiles-29524935-q8dsd\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.455276 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-config-volume\") pod \"collect-profiles-29524935-q8dsd\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.455690 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sk9\" (UniqueName: 
\"kubernetes.io/projected/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-kube-api-access-n6sk9\") pod \"collect-profiles-29524935-q8dsd\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.456511 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-secret-volume\") pod \"collect-profiles-29524935-q8dsd\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.461980 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-secret-volume\") pod \"collect-profiles-29524935-q8dsd\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.473259 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sk9\" (UniqueName: \"kubernetes.io/projected/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-kube-api-access-n6sk9\") pod \"collect-profiles-29524935-q8dsd\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.511650 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:00 crc kubenswrapper[4811]: I0219 10:15:00.956217 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd"] Feb 19 10:15:01 crc kubenswrapper[4811]: I0219 10:15:01.255505 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" event={"ID":"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70","Type":"ContainerStarted","Data":"4e0011a78c646b126bd8aa44f34404338d4c7e372a05a372382ba1c9cb681541"} Feb 19 10:15:01 crc kubenswrapper[4811]: I0219 10:15:01.255902 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" event={"ID":"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70","Type":"ContainerStarted","Data":"f2588712185a368eb28595611ef40720cff746d28cfa46e35489da8ab12df8c9"} Feb 19 10:15:01 crc kubenswrapper[4811]: I0219 10:15:01.279584 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" podStartSLOduration=1.279558554 podStartE2EDuration="1.279558554s" podCreationTimestamp="2026-02-19 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:15:01.273202723 +0000 UTC m=+389.799667409" watchObservedRunningTime="2026-02-19 10:15:01.279558554 +0000 UTC m=+389.806023240" Feb 19 10:15:01 crc kubenswrapper[4811]: I0219 10:15:01.493439 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7r9tk" podUID="0ea7e423-8a9e-4b21-a701-0682236bec67" containerName="registry-server" probeResult="failure" output=< Feb 19 10:15:01 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 19 10:15:01 crc kubenswrapper[4811]: > 
Feb 19 10:15:02 crc kubenswrapper[4811]: I0219 10:15:02.264546 4811 generic.go:334] "Generic (PLEG): container finished" podID="e3a58cbe-c919-45dd-ac89-f6b9c89dcb70" containerID="4e0011a78c646b126bd8aa44f34404338d4c7e372a05a372382ba1c9cb681541" exitCode=0 Feb 19 10:15:02 crc kubenswrapper[4811]: I0219 10:15:02.264598 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" event={"ID":"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70","Type":"ContainerDied","Data":"4e0011a78c646b126bd8aa44f34404338d4c7e372a05a372382ba1c9cb681541"} Feb 19 10:15:03 crc kubenswrapper[4811]: I0219 10:15:03.528269 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:03 crc kubenswrapper[4811]: I0219 10:15:03.719191 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-config-volume\") pod \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " Feb 19 10:15:03 crc kubenswrapper[4811]: I0219 10:15:03.719264 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6sk9\" (UniqueName: \"kubernetes.io/projected/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-kube-api-access-n6sk9\") pod \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " Feb 19 10:15:03 crc kubenswrapper[4811]: I0219 10:15:03.719291 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-secret-volume\") pod \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\" (UID: \"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70\") " Feb 19 10:15:03 crc kubenswrapper[4811]: I0219 10:15:03.720529 4811 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-config-volume" (OuterVolumeSpecName: "config-volume") pod "e3a58cbe-c919-45dd-ac89-f6b9c89dcb70" (UID: "e3a58cbe-c919-45dd-ac89-f6b9c89dcb70"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:15:03 crc kubenswrapper[4811]: I0219 10:15:03.724681 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e3a58cbe-c919-45dd-ac89-f6b9c89dcb70" (UID: "e3a58cbe-c919-45dd-ac89-f6b9c89dcb70"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:03 crc kubenswrapper[4811]: I0219 10:15:03.725557 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-kube-api-access-n6sk9" (OuterVolumeSpecName: "kube-api-access-n6sk9") pod "e3a58cbe-c919-45dd-ac89-f6b9c89dcb70" (UID: "e3a58cbe-c919-45dd-ac89-f6b9c89dcb70"). InnerVolumeSpecName "kube-api-access-n6sk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:03 crc kubenswrapper[4811]: I0219 10:15:03.821047 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:03 crc kubenswrapper[4811]: I0219 10:15:03.821090 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6sk9\" (UniqueName: \"kubernetes.io/projected/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-kube-api-access-n6sk9\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:03 crc kubenswrapper[4811]: I0219 10:15:03.821107 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:04 crc kubenswrapper[4811]: I0219 10:15:04.294780 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" event={"ID":"e3a58cbe-c919-45dd-ac89-f6b9c89dcb70","Type":"ContainerDied","Data":"f2588712185a368eb28595611ef40720cff746d28cfa46e35489da8ab12df8c9"} Feb 19 10:15:04 crc kubenswrapper[4811]: I0219 10:15:04.294833 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2588712185a368eb28595611ef40720cff746d28cfa46e35489da8ab12df8c9" Feb 19 10:15:04 crc kubenswrapper[4811]: I0219 10:15:04.294886 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd" Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.074585 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c5c8764-8gq58"] Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.075601 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58" podUID="2a60ff98-02c4-4af6-a628-ffb1f3e8d477" containerName="controller-manager" containerID="cri-o://fc0d6143b63c6ec46f545feb0723adaf0d56576b8f3ed3ddaf8b8c4802880fb0" gracePeriod=30 Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.319069 4811 generic.go:334] "Generic (PLEG): container finished" podID="2a60ff98-02c4-4af6-a628-ffb1f3e8d477" containerID="fc0d6143b63c6ec46f545feb0723adaf0d56576b8f3ed3ddaf8b8c4802880fb0" exitCode=0 Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.319121 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58" event={"ID":"2a60ff98-02c4-4af6-a628-ffb1f3e8d477","Type":"ContainerDied","Data":"fc0d6143b63c6ec46f545feb0723adaf0d56576b8f3ed3ddaf8b8c4802880fb0"} Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.497417 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58" Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.680359 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-client-ca\") pod \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.680492 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-serving-cert\") pod \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.680544 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz86p\" (UniqueName: \"kubernetes.io/projected/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-kube-api-access-pz86p\") pod \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.680611 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-proxy-ca-bundles\") pod \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.680714 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-config\") pod \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\" (UID: \"2a60ff98-02c4-4af6-a628-ffb1f3e8d477\") " Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.681367 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-client-ca" (OuterVolumeSpecName: "client-ca") pod "2a60ff98-02c4-4af6-a628-ffb1f3e8d477" (UID: "2a60ff98-02c4-4af6-a628-ffb1f3e8d477"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.681427 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-config" (OuterVolumeSpecName: "config") pod "2a60ff98-02c4-4af6-a628-ffb1f3e8d477" (UID: "2a60ff98-02c4-4af6-a628-ffb1f3e8d477"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.681438 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2a60ff98-02c4-4af6-a628-ffb1f3e8d477" (UID: "2a60ff98-02c4-4af6-a628-ffb1f3e8d477"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.685398 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2a60ff98-02c4-4af6-a628-ffb1f3e8d477" (UID: "2a60ff98-02c4-4af6-a628-ffb1f3e8d477"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.686433 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-kube-api-access-pz86p" (OuterVolumeSpecName: "kube-api-access-pz86p") pod "2a60ff98-02c4-4af6-a628-ffb1f3e8d477" (UID: "2a60ff98-02c4-4af6-a628-ffb1f3e8d477"). InnerVolumeSpecName "kube-api-access-pz86p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.781859 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.781900 4811 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.781913 4811 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.781925 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz86p\" (UniqueName: \"kubernetes.io/projected/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-kube-api-access-pz86p\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:08 crc kubenswrapper[4811]: I0219 10:15:08.781941 4811 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a60ff98-02c4-4af6-a628-ffb1f3e8d477-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.327934 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58" event={"ID":"2a60ff98-02c4-4af6-a628-ffb1f3e8d477","Type":"ContainerDied","Data":"24150ddd9030f033e34124cb6dfd25e7af3cf2f338885be8a5305e5e5b517d6c"} Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.328533 4811 scope.go:117] "RemoveContainer" containerID="fc0d6143b63c6ec46f545feb0723adaf0d56576b8f3ed3ddaf8b8c4802880fb0" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.328023 4811 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c5c8764-8gq58" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.367044 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c5c8764-8gq58"] Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.370968 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c5c8764-8gq58"] Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.521043 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b8f6c6657-gw9js"] Feb 19 10:15:09 crc kubenswrapper[4811]: E0219 10:15:09.521389 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a58cbe-c919-45dd-ac89-f6b9c89dcb70" containerName="collect-profiles" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.521407 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a58cbe-c919-45dd-ac89-f6b9c89dcb70" containerName="collect-profiles" Feb 19 10:15:09 crc kubenswrapper[4811]: E0219 10:15:09.521433 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a60ff98-02c4-4af6-a628-ffb1f3e8d477" containerName="controller-manager" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.521440 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a60ff98-02c4-4af6-a628-ffb1f3e8d477" containerName="controller-manager" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.521650 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a58cbe-c919-45dd-ac89-f6b9c89dcb70" containerName="collect-profiles" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.521676 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a60ff98-02c4-4af6-a628-ffb1f3e8d477" containerName="controller-manager" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.522335 4811 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.526632 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.529838 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.531043 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.531377 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.531954 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.532656 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.534662 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.537167 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b8f6c6657-gw9js"] Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.693705 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db561e5d-154f-46ca-b479-3be663f44038-client-ca\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " 
pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.693772 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db561e5d-154f-46ca-b479-3be663f44038-config\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.693804 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db561e5d-154f-46ca-b479-3be663f44038-proxy-ca-bundles\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.693832 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvflc\" (UniqueName: \"kubernetes.io/projected/db561e5d-154f-46ca-b479-3be663f44038-kube-api-access-zvflc\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.694426 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db561e5d-154f-46ca-b479-3be663f44038-serving-cert\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.796062 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/db561e5d-154f-46ca-b479-3be663f44038-client-ca\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.796496 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db561e5d-154f-46ca-b479-3be663f44038-config\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.796663 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db561e5d-154f-46ca-b479-3be663f44038-proxy-ca-bundles\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.796838 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvflc\" (UniqueName: \"kubernetes.io/projected/db561e5d-154f-46ca-b479-3be663f44038-kube-api-access-zvflc\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.797015 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db561e5d-154f-46ca-b479-3be663f44038-serving-cert\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.797411 
4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db561e5d-154f-46ca-b479-3be663f44038-client-ca\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.797965 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db561e5d-154f-46ca-b479-3be663f44038-proxy-ca-bundles\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.798613 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db561e5d-154f-46ca-b479-3be663f44038-config\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.802351 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db561e5d-154f-46ca-b479-3be663f44038-serving-cert\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.825104 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvflc\" (UniqueName: \"kubernetes.io/projected/db561e5d-154f-46ca-b479-3be663f44038-kube-api-access-zvflc\") pod \"controller-manager-6b8f6c6657-gw9js\" (UID: \"db561e5d-154f-46ca-b479-3be663f44038\") " pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 
10:15:09 crc kubenswrapper[4811]: I0219 10:15:09.851840 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:10 crc kubenswrapper[4811]: I0219 10:15:10.089732 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b8f6c6657-gw9js"] Feb 19 10:15:10 crc kubenswrapper[4811]: I0219 10:15:10.337071 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" event={"ID":"db561e5d-154f-46ca-b479-3be663f44038","Type":"ContainerStarted","Data":"376f5ae939ad262f082b9c2da2cea534d0225b30b5de973dba4a85ad9252386d"} Feb 19 10:15:10 crc kubenswrapper[4811]: I0219 10:15:10.337128 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" event={"ID":"db561e5d-154f-46ca-b479-3be663f44038","Type":"ContainerStarted","Data":"708350648d9c5de255d77205cff748fbdd77d451ff6bfc16eee5fddad6d8133b"} Feb 19 10:15:10 crc kubenswrapper[4811]: I0219 10:15:10.337564 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:10 crc kubenswrapper[4811]: I0219 10:15:10.340007 4811 patch_prober.go:28] interesting pod/controller-manager-6b8f6c6657-gw9js container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Feb 19 10:15:10 crc kubenswrapper[4811]: I0219 10:15:10.340052 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" podUID="db561e5d-154f-46ca-b479-3be663f44038" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 
10.217.0.67:8443: connect: connection refused" Feb 19 10:15:10 crc kubenswrapper[4811]: I0219 10:15:10.364141 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" podStartSLOduration=2.364119087 podStartE2EDuration="2.364119087s" podCreationTimestamp="2026-02-19 10:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:15:10.361496156 +0000 UTC m=+398.887960832" watchObservedRunningTime="2026-02-19 10:15:10.364119087 +0000 UTC m=+398.890583773" Feb 19 10:15:10 crc kubenswrapper[4811]: I0219 10:15:10.515513 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7r9tk" Feb 19 10:15:10 crc kubenswrapper[4811]: I0219 10:15:10.567933 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7r9tk" Feb 19 10:15:10 crc kubenswrapper[4811]: I0219 10:15:10.590531 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a60ff98-02c4-4af6-a628-ffb1f3e8d477" path="/var/lib/kubelet/pods/2a60ff98-02c4-4af6-a628-ffb1f3e8d477/volumes" Feb 19 10:15:11 crc kubenswrapper[4811]: I0219 10:15:11.347963 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b8f6c6657-gw9js" Feb 19 10:15:14 crc kubenswrapper[4811]: I0219 10:15:14.388034 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-q49nd" Feb 19 10:15:14 crc kubenswrapper[4811]: I0219 10:15:14.450198 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p4kkk"] Feb 19 10:15:26 crc kubenswrapper[4811]: I0219 10:15:26.787931 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:15:26 crc kubenswrapper[4811]: I0219 10:15:26.788608 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:15:26 crc kubenswrapper[4811]: I0219 10:15:26.788658 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:15:26 crc kubenswrapper[4811]: I0219 10:15:26.789197 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5ee7e0c8452ed8d16d6bb0b1911f3bd3e4bc5692e9f2491a7bdfabe3ba59054"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:15:26 crc kubenswrapper[4811]: I0219 10:15:26.789267 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://f5ee7e0c8452ed8d16d6bb0b1911f3bd3e4bc5692e9f2491a7bdfabe3ba59054" gracePeriod=600 Feb 19 10:15:27 crc kubenswrapper[4811]: I0219 10:15:27.455252 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="f5ee7e0c8452ed8d16d6bb0b1911f3bd3e4bc5692e9f2491a7bdfabe3ba59054" exitCode=0 Feb 19 10:15:27 crc kubenswrapper[4811]: I0219 10:15:27.455357 4811 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"f5ee7e0c8452ed8d16d6bb0b1911f3bd3e4bc5692e9f2491a7bdfabe3ba59054"} Feb 19 10:15:27 crc kubenswrapper[4811]: I0219 10:15:27.455975 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"726804576e37e57d9029714747ad105ed4b9c390bdf00d5b9989e36faf5e4955"} Feb 19 10:15:27 crc kubenswrapper[4811]: I0219 10:15:27.455994 4811 scope.go:117] "RemoveContainer" containerID="85f8509a3d8137115d3d67acb4eaac4f9bd1b1c9e7d75f23f9a11929d501b397" Feb 19 10:15:32 crc kubenswrapper[4811]: I0219 10:15:32.707697 4811 scope.go:117] "RemoveContainer" containerID="6a4b8138d1d6b88a57900c770e6553635cf6712d2a43061747bf2e1fe18f4a34" Feb 19 10:15:32 crc kubenswrapper[4811]: I0219 10:15:32.729863 4811 scope.go:117] "RemoveContainer" containerID="6d0f17a5a12ced0609f0d66c1c2ae6b6595b8fbf33fb199ad2c215e437312454" Feb 19 10:15:32 crc kubenswrapper[4811]: I0219 10:15:32.759505 4811 scope.go:117] "RemoveContainer" containerID="3563cb6a1b6e85c7da6f8b82d4eb14ac2445add14cd67ff59b724bdd75f805a6" Feb 19 10:15:32 crc kubenswrapper[4811]: I0219 10:15:32.839048 4811 scope.go:117] "RemoveContainer" containerID="71432123bed3881564d66d0157b81ff594ebd504c366ae140b40624f19dc6692" Feb 19 10:15:32 crc kubenswrapper[4811]: I0219 10:15:32.859052 4811 scope.go:117] "RemoveContainer" containerID="63e2289ebc3325277881df161634bab3c20687bdc08d9653a87ccfc9967ce4c0" Feb 19 10:15:32 crc kubenswrapper[4811]: I0219 10:15:32.872158 4811 scope.go:117] "RemoveContainer" containerID="6b8881eeefd639fb5dea6263f1b793a99d627b6597dd0f3f716c5eea0eff7a53" Feb 19 10:15:39 crc kubenswrapper[4811]: I0219 10:15:39.503001 4811 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" podUID="d9c16aff-60a4-404a-b3f6-bc951d28e931" containerName="registry" containerID="cri-o://7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250" gracePeriod=30 Feb 19 10:15:39 crc kubenswrapper[4811]: I0219 10:15:39.985725 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.049542 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-certificates\") pod \"d9c16aff-60a4-404a-b3f6-bc951d28e931\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.049666 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-trusted-ca\") pod \"d9c16aff-60a4-404a-b3f6-bc951d28e931\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.049708 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9c16aff-60a4-404a-b3f6-bc951d28e931-installation-pull-secrets\") pod \"d9c16aff-60a4-404a-b3f6-bc951d28e931\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.049824 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-tls\") pod \"d9c16aff-60a4-404a-b3f6-bc951d28e931\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.049874 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wtrdh\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-kube-api-access-wtrdh\") pod \"d9c16aff-60a4-404a-b3f6-bc951d28e931\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.049918 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-bound-sa-token\") pod \"d9c16aff-60a4-404a-b3f6-bc951d28e931\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.050194 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d9c16aff-60a4-404a-b3f6-bc951d28e931\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.050246 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9c16aff-60a4-404a-b3f6-bc951d28e931-ca-trust-extracted\") pod \"d9c16aff-60a4-404a-b3f6-bc951d28e931\" (UID: \"d9c16aff-60a4-404a-b3f6-bc951d28e931\") " Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.050707 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d9c16aff-60a4-404a-b3f6-bc951d28e931" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.051042 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d9c16aff-60a4-404a-b3f6-bc951d28e931" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.059065 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d9c16aff-60a4-404a-b3f6-bc951d28e931" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.059596 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c16aff-60a4-404a-b3f6-bc951d28e931-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d9c16aff-60a4-404a-b3f6-bc951d28e931" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.059985 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-kube-api-access-wtrdh" (OuterVolumeSpecName: "kube-api-access-wtrdh") pod "d9c16aff-60a4-404a-b3f6-bc951d28e931" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931"). InnerVolumeSpecName "kube-api-access-wtrdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.060168 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d9c16aff-60a4-404a-b3f6-bc951d28e931" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.064895 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d9c16aff-60a4-404a-b3f6-bc951d28e931" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.069416 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9c16aff-60a4-404a-b3f6-bc951d28e931-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d9c16aff-60a4-404a-b3f6-bc951d28e931" (UID: "d9c16aff-60a4-404a-b3f6-bc951d28e931"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.152008 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.152070 4811 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9c16aff-60a4-404a-b3f6-bc951d28e931-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.152085 4811 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.152095 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtrdh\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-kube-api-access-wtrdh\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.152105 4811 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9c16aff-60a4-404a-b3f6-bc951d28e931-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.152115 4811 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9c16aff-60a4-404a-b3f6-bc951d28e931-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.152124 4811 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9c16aff-60a4-404a-b3f6-bc951d28e931-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 10:15:40 crc 
kubenswrapper[4811]: I0219 10:15:40.528574 4811 generic.go:334] "Generic (PLEG): container finished" podID="d9c16aff-60a4-404a-b3f6-bc951d28e931" containerID="7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250" exitCode=0 Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.528615 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" event={"ID":"d9c16aff-60a4-404a-b3f6-bc951d28e931","Type":"ContainerDied","Data":"7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250"} Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.528648 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" event={"ID":"d9c16aff-60a4-404a-b3f6-bc951d28e931","Type":"ContainerDied","Data":"de481ca995a290e0279ace06a4fd1b03b2f783eb4c9b85c17e32501f8760a734"} Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.528648 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-p4kkk" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.528691 4811 scope.go:117] "RemoveContainer" containerID="7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.547905 4811 scope.go:117] "RemoveContainer" containerID="7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250" Feb 19 10:15:40 crc kubenswrapper[4811]: E0219 10:15:40.548520 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250\": container with ID starting with 7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250 not found: ID does not exist" containerID="7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.548575 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250"} err="failed to get container status \"7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250\": rpc error: code = NotFound desc = could not find container \"7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250\": container with ID starting with 7afb590705614f8176e5162731cfee9190cae16955eddea51e58a9bc3fbc9250 not found: ID does not exist" Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.562807 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p4kkk"] Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.565984 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-p4kkk"] Feb 19 10:15:40 crc kubenswrapper[4811]: I0219 10:15:40.590877 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d9c16aff-60a4-404a-b3f6-bc951d28e931" path="/var/lib/kubelet/pods/d9c16aff-60a4-404a-b3f6-bc951d28e931/volumes" Feb 19 10:17:56 crc kubenswrapper[4811]: I0219 10:17:56.788355 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:17:56 crc kubenswrapper[4811]: I0219 10:17:56.789021 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:18:26 crc kubenswrapper[4811]: I0219 10:18:26.788573 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:18:26 crc kubenswrapper[4811]: I0219 10:18:26.789099 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:18:56 crc kubenswrapper[4811]: I0219 10:18:56.788037 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:18:56 crc 
kubenswrapper[4811]: I0219 10:18:56.788727 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:18:56 crc kubenswrapper[4811]: I0219 10:18:56.788792 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:18:56 crc kubenswrapper[4811]: I0219 10:18:56.789514 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"726804576e37e57d9029714747ad105ed4b9c390bdf00d5b9989e36faf5e4955"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:18:56 crc kubenswrapper[4811]: I0219 10:18:56.789592 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://726804576e37e57d9029714747ad105ed4b9c390bdf00d5b9989e36faf5e4955" gracePeriod=600 Feb 19 10:18:57 crc kubenswrapper[4811]: I0219 10:18:57.758603 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="726804576e37e57d9029714747ad105ed4b9c390bdf00d5b9989e36faf5e4955" exitCode=0 Feb 19 10:18:57 crc kubenswrapper[4811]: I0219 10:18:57.758758 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"726804576e37e57d9029714747ad105ed4b9c390bdf00d5b9989e36faf5e4955"} 
Feb 19 10:18:57 crc kubenswrapper[4811]: I0219 10:18:57.759009 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"193ac12085ccacd5c18931cdeb481fcee9513ec65e0023789373cfc6e16223d8"} Feb 19 10:18:57 crc kubenswrapper[4811]: I0219 10:18:57.759026 4811 scope.go:117] "RemoveContainer" containerID="f5ee7e0c8452ed8d16d6bb0b1911f3bd3e4bc5692e9f2491a7bdfabe3ba59054" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.906935 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-npddf"] Feb 19 10:20:02 crc kubenswrapper[4811]: E0219 10:20:02.907569 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c16aff-60a4-404a-b3f6-bc951d28e931" containerName="registry" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.907582 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c16aff-60a4-404a-b3f6-bc951d28e931" containerName="registry" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.907693 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c16aff-60a4-404a-b3f6-bc951d28e931" containerName="registry" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.908060 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-npddf" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.910938 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-9csb4" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.911205 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.912082 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.924272 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wh2j5"] Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.925191 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wh2j5" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.933533 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-mpg8r" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.937918 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-npddf"] Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.944907 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vl4s4"] Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.945737 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-vl4s4" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.947408 4811 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xzcbt" Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.955690 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wh2j5"] Feb 19 10:20:02 crc kubenswrapper[4811]: I0219 10:20:02.986056 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vl4s4"] Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.018490 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjlx\" (UniqueName: \"kubernetes.io/projected/09c94fb6-b7d0-434f-8f41-d8dc7ff56e76-kube-api-access-gsjlx\") pod \"cert-manager-858654f9db-npddf\" (UID: \"09c94fb6-b7d0-434f-8f41-d8dc7ff56e76\") " pod="cert-manager/cert-manager-858654f9db-npddf" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.119385 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk7cg\" (UniqueName: \"kubernetes.io/projected/c5c3873a-d5b3-4fb9-8ac2-118ae63676a0-kube-api-access-mk7cg\") pod \"cert-manager-cainjector-cf98fcc89-wh2j5\" (UID: \"c5c3873a-d5b3-4fb9-8ac2-118ae63676a0\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wh2j5" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.119515 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjlx\" (UniqueName: \"kubernetes.io/projected/09c94fb6-b7d0-434f-8f41-d8dc7ff56e76-kube-api-access-gsjlx\") pod \"cert-manager-858654f9db-npddf\" (UID: \"09c94fb6-b7d0-434f-8f41-d8dc7ff56e76\") " pod="cert-manager/cert-manager-858654f9db-npddf" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.119572 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sc5g\" (UniqueName: \"kubernetes.io/projected/d9b74399-e5df-4e67-b52d-85b37e1a5125-kube-api-access-6sc5g\") pod \"cert-manager-webhook-687f57d79b-vl4s4\" (UID: \"d9b74399-e5df-4e67-b52d-85b37e1a5125\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vl4s4" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.145308 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjlx\" (UniqueName: \"kubernetes.io/projected/09c94fb6-b7d0-434f-8f41-d8dc7ff56e76-kube-api-access-gsjlx\") pod \"cert-manager-858654f9db-npddf\" (UID: \"09c94fb6-b7d0-434f-8f41-d8dc7ff56e76\") " pod="cert-manager/cert-manager-858654f9db-npddf" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.220205 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk7cg\" (UniqueName: \"kubernetes.io/projected/c5c3873a-d5b3-4fb9-8ac2-118ae63676a0-kube-api-access-mk7cg\") pod \"cert-manager-cainjector-cf98fcc89-wh2j5\" (UID: \"c5c3873a-d5b3-4fb9-8ac2-118ae63676a0\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wh2j5" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.220672 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sc5g\" (UniqueName: \"kubernetes.io/projected/d9b74399-e5df-4e67-b52d-85b37e1a5125-kube-api-access-6sc5g\") pod \"cert-manager-webhook-687f57d79b-vl4s4\" (UID: \"d9b74399-e5df-4e67-b52d-85b37e1a5125\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vl4s4" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.232359 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-npddf" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.238135 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sc5g\" (UniqueName: \"kubernetes.io/projected/d9b74399-e5df-4e67-b52d-85b37e1a5125-kube-api-access-6sc5g\") pod \"cert-manager-webhook-687f57d79b-vl4s4\" (UID: \"d9b74399-e5df-4e67-b52d-85b37e1a5125\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vl4s4" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.243179 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk7cg\" (UniqueName: \"kubernetes.io/projected/c5c3873a-d5b3-4fb9-8ac2-118ae63676a0-kube-api-access-mk7cg\") pod \"cert-manager-cainjector-cf98fcc89-wh2j5\" (UID: \"c5c3873a-d5b3-4fb9-8ac2-118ae63676a0\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wh2j5" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.245746 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wh2j5" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.264070 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-vl4s4" Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.476881 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-npddf"] Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.493505 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.722209 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vl4s4"] Feb 19 10:20:03 crc kubenswrapper[4811]: W0219 10:20:03.729022 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9b74399_e5df_4e67_b52d_85b37e1a5125.slice/crio-431c17f75020cf31afeea6ba7368d1b655402a37e473101be34eb1429fa282d4 WatchSource:0}: Error finding container 431c17f75020cf31afeea6ba7368d1b655402a37e473101be34eb1429fa282d4: Status 404 returned error can't find the container with id 431c17f75020cf31afeea6ba7368d1b655402a37e473101be34eb1429fa282d4 Feb 19 10:20:03 crc kubenswrapper[4811]: I0219 10:20:03.734782 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wh2j5"] Feb 19 10:20:03 crc kubenswrapper[4811]: W0219 10:20:03.738088 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5c3873a_d5b3_4fb9_8ac2_118ae63676a0.slice/crio-f63bd0498e88eed5cd977edbc62c9765601349f28139ed96aa6d53999d2b6aef WatchSource:0}: Error finding container f63bd0498e88eed5cd977edbc62c9765601349f28139ed96aa6d53999d2b6aef: Status 404 returned error can't find the container with id f63bd0498e88eed5cd977edbc62c9765601349f28139ed96aa6d53999d2b6aef Feb 19 10:20:04 crc kubenswrapper[4811]: I0219 10:20:04.174603 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-wh2j5" event={"ID":"c5c3873a-d5b3-4fb9-8ac2-118ae63676a0","Type":"ContainerStarted","Data":"f63bd0498e88eed5cd977edbc62c9765601349f28139ed96aa6d53999d2b6aef"} Feb 19 10:20:04 crc kubenswrapper[4811]: I0219 10:20:04.175718 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-vl4s4" event={"ID":"d9b74399-e5df-4e67-b52d-85b37e1a5125","Type":"ContainerStarted","Data":"431c17f75020cf31afeea6ba7368d1b655402a37e473101be34eb1429fa282d4"} Feb 19 10:20:04 crc kubenswrapper[4811]: I0219 10:20:04.176718 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-npddf" event={"ID":"09c94fb6-b7d0-434f-8f41-d8dc7ff56e76","Type":"ContainerStarted","Data":"cd50ca67e39af6bc667d68a2bbb9700d255f8d58340a21ac056c3c84c656a939"} Feb 19 10:20:07 crc kubenswrapper[4811]: I0219 10:20:07.198597 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-npddf" event={"ID":"09c94fb6-b7d0-434f-8f41-d8dc7ff56e76","Type":"ContainerStarted","Data":"0be078d33d578cab32e0c1ae9bfabbb99e66b1b0438c78d84afcc7699cc0a33a"} Feb 19 10:20:07 crc kubenswrapper[4811]: I0219 10:20:07.216277 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-npddf" podStartSLOduration=1.9086312539999999 podStartE2EDuration="5.216246983s" podCreationTimestamp="2026-02-19 10:20:02 +0000 UTC" firstStartedPulling="2026-02-19 10:20:03.493232464 +0000 UTC m=+692.019697150" lastFinishedPulling="2026-02-19 10:20:06.800848203 +0000 UTC m=+695.327312879" observedRunningTime="2026-02-19 10:20:07.214621061 +0000 UTC m=+695.741085757" watchObservedRunningTime="2026-02-19 10:20:07.216246983 +0000 UTC m=+695.742711669" Feb 19 10:20:08 crc kubenswrapper[4811]: I0219 10:20:08.206056 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wh2j5" 
event={"ID":"c5c3873a-d5b3-4fb9-8ac2-118ae63676a0","Type":"ContainerStarted","Data":"9df6080d4279099aef5c0c68e1bf2d75f7df1c7088f01e8d2c2feebbed551814"} Feb 19 10:20:08 crc kubenswrapper[4811]: I0219 10:20:08.224490 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wh2j5" podStartSLOduration=1.9438658009999998 podStartE2EDuration="6.224472442s" podCreationTimestamp="2026-02-19 10:20:02 +0000 UTC" firstStartedPulling="2026-02-19 10:20:03.740726784 +0000 UTC m=+692.267191460" lastFinishedPulling="2026-02-19 10:20:08.021333415 +0000 UTC m=+696.547798101" observedRunningTime="2026-02-19 10:20:08.221374171 +0000 UTC m=+696.747838857" watchObservedRunningTime="2026-02-19 10:20:08.224472442 +0000 UTC m=+696.750937128" Feb 19 10:20:09 crc kubenswrapper[4811]: I0219 10:20:09.214185 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-vl4s4" event={"ID":"d9b74399-e5df-4e67-b52d-85b37e1a5125","Type":"ContainerStarted","Data":"b73ad41c677f9448a436e0a1c90aed7a3abde683c943a5f88829f9588a29fc91"} Feb 19 10:20:09 crc kubenswrapper[4811]: I0219 10:20:09.214315 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-vl4s4" Feb 19 10:20:09 crc kubenswrapper[4811]: I0219 10:20:09.231647 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-vl4s4" podStartSLOduration=2.863832832 podStartE2EDuration="7.231628023s" podCreationTimestamp="2026-02-19 10:20:02 +0000 UTC" firstStartedPulling="2026-02-19 10:20:03.730512418 +0000 UTC m=+692.256977104" lastFinishedPulling="2026-02-19 10:20:08.098307569 +0000 UTC m=+696.624772295" observedRunningTime="2026-02-19 10:20:09.227188738 +0000 UTC m=+697.753653444" watchObservedRunningTime="2026-02-19 10:20:09.231628023 +0000 UTC m=+697.758092709" Feb 19 10:20:13 crc kubenswrapper[4811]: I0219 
10:20:13.267482 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-vl4s4" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.170771 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ln785"] Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.172513 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovn-controller" containerID="cri-o://99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860" gracePeriod=30 Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.172596 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="sbdb" containerID="cri-o://1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b" gracePeriod=30 Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.172718 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="kube-rbac-proxy-node" containerID="cri-o://6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4" gracePeriod=30 Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.172737 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="northd" containerID="cri-o://f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb" gracePeriod=30 Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.172550 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" 
podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="nbdb" containerID="cri-o://36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45" gracePeriod=30 Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.172730 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovn-acl-logging" containerID="cri-o://67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e" gracePeriod=30 Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.172951 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c" gracePeriod=30 Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.245622 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" containerID="cri-o://3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9" gracePeriod=30 Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.353245 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxbrg_9766aa7e-a66b-4efc-a5de-2fe49ef00eb8/kube-multus/2.log" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.353787 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxbrg_9766aa7e-a66b-4efc-a5de-2fe49ef00eb8/kube-multus/1.log" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.353863 4811 generic.go:334] "Generic (PLEG): container finished" podID="9766aa7e-a66b-4efc-a5de-2fe49ef00eb8" containerID="8a4885b0091a928d075b1754c06a54a38c4cec164e586a2e7cb86b25fc90f049" exitCode=2 Feb 19 10:20:28 crc 
kubenswrapper[4811]: I0219 10:20:28.353944 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxbrg" event={"ID":"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8","Type":"ContainerDied","Data":"8a4885b0091a928d075b1754c06a54a38c4cec164e586a2e7cb86b25fc90f049"} Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.354004 4811 scope.go:117] "RemoveContainer" containerID="5467aad5459e55d3a7720c8a92bcabb265205bd73be8b21ab2023463f50066cd" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.354761 4811 scope.go:117] "RemoveContainer" containerID="8a4885b0091a928d075b1754c06a54a38c4cec164e586a2e7cb86b25fc90f049" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.355256 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lxbrg_openshift-multus(9766aa7e-a66b-4efc-a5de-2fe49ef00eb8)\"" pod="openshift-multus/multus-lxbrg" podUID="9766aa7e-a66b-4efc-a5de-2fe49ef00eb8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.363572 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/3.log" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.367443 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovn-acl-logging/0.log" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.368012 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovn-controller/0.log" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.368431 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c" exitCode=0 
Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.368490 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4" exitCode=0 Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.368500 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e" exitCode=143 Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.368511 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860" exitCode=143 Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.368541 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c"} Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.368576 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4"} Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.368596 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e"} Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.368605 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" 
event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860"} Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.564186 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/3.log" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.566506 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovn-acl-logging/0.log" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.567050 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovn-controller/0.log" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.567463 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.633079 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fv7h8"] Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.633847 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovn-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.633863 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovn-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.633876 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovn-acl-logging" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.633887 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovn-acl-logging" 
Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.633899 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.633906 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.633919 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.633925 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.633935 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="sbdb" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.633943 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="sbdb" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.633952 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="kube-rbac-proxy-node" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.633959 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="kube-rbac-proxy-node" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.633970 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.633977 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" 
containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.633986 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.633997 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.634008 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634016 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.634030 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="nbdb" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634037 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="nbdb" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.634047 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="northd" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634053 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="northd" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.634064 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="kubecfg-setup" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634071 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="kubecfg-setup" Feb 19 
10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634195 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634206 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634215 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634225 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovn-acl-logging" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634233 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="northd" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634241 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="kube-rbac-proxy-node" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634249 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovn-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634258 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="nbdb" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634267 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634277 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" 
containerName="sbdb" Feb 19 10:20:28 crc kubenswrapper[4811]: E0219 10:20:28.634393 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634401 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634522 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.634537 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerName="ovnkube-controller" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.636591 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.697561 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-script-lib\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.697637 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkbcd\" (UniqueName: \"kubernetes.io/projected/554fedad-397a-4827-b5ad-48a681a7a2e5-kube-api-access-mkbcd\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698151 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-bin\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698183 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-openvswitch\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698217 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-var-lib-openvswitch\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698249 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-ovn\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698264 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698278 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-slash\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698253 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698357 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-slash" (OuterVolumeSpecName: "host-slash") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698377 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698313 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698313 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698335 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698350 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-etc-openvswitch\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698472 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-ovn-kubernetes\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698502 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-netd\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698528 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-kubelet\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698561 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-env-overrides\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698567 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698600 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698608 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-systemd-units\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698602 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698635 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698663 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/554fedad-397a-4827-b5ad-48a681a7a2e5-ovn-node-metrics-cert\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698715 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-netns\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698749 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-systemd\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698777 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-node-log\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698781 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698806 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-log-socket\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698834 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-config\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698857 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-log-socket" (OuterVolumeSpecName: "log-socket") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698867 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-node-log" (OuterVolumeSpecName: "node-log") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698878 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"554fedad-397a-4827-b5ad-48a681a7a2e5\" (UID: \"554fedad-397a-4827-b5ad-48a681a7a2e5\") " Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.698998 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699122 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-kubelet\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699049 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699164 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-cni-bin\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699192 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699221 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-ovnkube-script-lib\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699229 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699277 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-var-lib-openvswitch\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699308 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-systemd-units\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699335 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-run-ovn\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699361 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-ovnkube-config\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699389 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-slash\") pod \"ovnkube-node-fv7h8\" (UID: 
\"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699411 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-run-netns\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699484 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htbf4\" (UniqueName: \"kubernetes.io/projected/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-kube-api-access-htbf4\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699547 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-run-openvswitch\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699582 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-cni-netd\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699612 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-env-overrides\") pod 
\"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699647 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-node-log\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699672 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-log-socket\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699717 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-run-systemd\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699747 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-etc-openvswitch\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699798 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699835 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-ovn-node-metrics-cert\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699887 4811 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.699901 4811 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700002 4811 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700143 4811 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700163 4811 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc 
kubenswrapper[4811]: I0219 10:20:28.700176 4811 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700210 4811 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700225 4811 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700237 4811 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700249 4811 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700262 4811 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700294 4811 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700305 4811 reconciler_common.go:293] "Volume detached for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700315 4811 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700326 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700339 4811 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.700374 4811 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/554fedad-397a-4827-b5ad-48a681a7a2e5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.706094 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/554fedad-397a-4827-b5ad-48a681a7a2e5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.706253 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/554fedad-397a-4827-b5ad-48a681a7a2e5-kube-api-access-mkbcd" (OuterVolumeSpecName: "kube-api-access-mkbcd") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "kube-api-access-mkbcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.714800 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "554fedad-397a-4827-b5ad-48a681a7a2e5" (UID: "554fedad-397a-4827-b5ad-48a681a7a2e5"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801388 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-ovn-node-metrics-cert\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801479 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-kubelet\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801498 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-cni-bin\") pod 
\"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801534 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801575 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-ovnkube-script-lib\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801610 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-systemd-units\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801613 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-cni-bin\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801633 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-var-lib-openvswitch\") pod \"ovnkube-node-fv7h8\" (UID: 
\"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801659 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-ovnkube-config\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801681 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-run-ovn\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801701 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-slash\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801714 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-var-lib-openvswitch\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801720 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-run-netns\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 
19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801743 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-run-netns\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801682 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-systemd-units\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801749 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-run-ovn-kubernetes\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801613 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-kubelet\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801798 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-run-ovn\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801857 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-slash\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801897 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htbf4\" (UniqueName: \"kubernetes.io/projected/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-kube-api-access-htbf4\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801938 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-run-openvswitch\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.801978 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-env-overrides\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802011 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-cni-netd\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802037 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-run-openvswitch\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-node-log\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802111 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-log-socket\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802167 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-etc-openvswitch\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802206 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-run-systemd\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802245 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802320 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/554fedad-397a-4827-b5ad-48a681a7a2e5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802341 4811 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/554fedad-397a-4827-b5ad-48a681a7a2e5-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802342 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-log-socket\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802359 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkbcd\" (UniqueName: \"kubernetes.io/projected/554fedad-397a-4827-b5ad-48a681a7a2e5-kube-api-access-mkbcd\") on node \"crc\" DevicePath \"\"" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802385 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-etc-openvswitch\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802394 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-run-systemd\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802405 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802419 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-host-cni-netd\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802475 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-node-log\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802865 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-ovnkube-script-lib\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.802969 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-env-overrides\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.803039 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-ovnkube-config\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.805648 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-ovn-node-metrics-cert\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.821832 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htbf4\" (UniqueName: \"kubernetes.io/projected/de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce-kube-api-access-htbf4\") pod \"ovnkube-node-fv7h8\" (UID: \"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce\") " pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: I0219 10:20:28.955255 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:28 crc kubenswrapper[4811]: W0219 10:20:28.980176 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde2bbb29_7d2a_4eb5_a74e_1185e14dc3ce.slice/crio-4ec0b967b3307a8707d6ca784983da4efd2726d95b36857e6b15ee47ebcc2b9d WatchSource:0}: Error finding container 4ec0b967b3307a8707d6ca784983da4efd2726d95b36857e6b15ee47ebcc2b9d: Status 404 returned error can't find the container with id 4ec0b967b3307a8707d6ca784983da4efd2726d95b36857e6b15ee47ebcc2b9d Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.378391 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovnkube-controller/3.log" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.388385 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovn-acl-logging/0.log" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.389327 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ln785_554fedad-397a-4827-b5ad-48a681a7a2e5/ovn-controller/0.log" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.389931 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9" exitCode=0 Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.389963 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b" exitCode=0 Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.389973 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" 
containerID="36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45" exitCode=0 Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.389982 4811 generic.go:334] "Generic (PLEG): container finished" podID="554fedad-397a-4827-b5ad-48a681a7a2e5" containerID="f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb" exitCode=0 Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.390101 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9"} Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.390157 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b"} Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.390174 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45"} Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.390190 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb"} Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.390210 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" event={"ID":"554fedad-397a-4827-b5ad-48a681a7a2e5","Type":"ContainerDied","Data":"f27ea24930a1682da92e0fc4a861414130084d8eaa1a23cf28f9fb0da0af8e06"} Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.390236 4811 
scope.go:117] "RemoveContainer" containerID="3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.390560 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ln785" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.396373 4811 generic.go:334] "Generic (PLEG): container finished" podID="de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce" containerID="ed621f3b19ced17ea018103fffdea8b086a883a905faf46090682eb097ae38bf" exitCode=0 Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.396528 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" event={"ID":"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce","Type":"ContainerDied","Data":"ed621f3b19ced17ea018103fffdea8b086a883a905faf46090682eb097ae38bf"} Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.396592 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" event={"ID":"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce","Type":"ContainerStarted","Data":"4ec0b967b3307a8707d6ca784983da4efd2726d95b36857e6b15ee47ebcc2b9d"} Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.401418 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxbrg_9766aa7e-a66b-4efc-a5de-2fe49ef00eb8/kube-multus/2.log" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.416747 4811 scope.go:117] "RemoveContainer" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.450847 4811 scope.go:117] "RemoveContainer" containerID="1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.467014 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ln785"] Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 
10:20:29.473569 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ln785"] Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.492289 4811 scope.go:117] "RemoveContainer" containerID="36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.525858 4811 scope.go:117] "RemoveContainer" containerID="f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.555048 4811 scope.go:117] "RemoveContainer" containerID="85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.571996 4811 scope.go:117] "RemoveContainer" containerID="6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.586727 4811 scope.go:117] "RemoveContainer" containerID="67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.603829 4811 scope.go:117] "RemoveContainer" containerID="99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.620175 4811 scope.go:117] "RemoveContainer" containerID="44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.639319 4811 scope.go:117] "RemoveContainer" containerID="3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9" Feb 19 10:20:29 crc kubenswrapper[4811]: E0219 10:20:29.642050 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9\": container with ID starting with 3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9 not found: ID does not exist" 
containerID="3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.642095 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9"} err="failed to get container status \"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9\": rpc error: code = NotFound desc = could not find container \"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9\": container with ID starting with 3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.642131 4811 scope.go:117] "RemoveContainer" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:20:29 crc kubenswrapper[4811]: E0219 10:20:29.642609 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\": container with ID starting with 89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910 not found: ID does not exist" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.642644 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910"} err="failed to get container status \"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\": rpc error: code = NotFound desc = could not find container \"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\": container with ID starting with 89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.642670 4811 scope.go:117] 
"RemoveContainer" containerID="1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b" Feb 19 10:20:29 crc kubenswrapper[4811]: E0219 10:20:29.643071 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\": container with ID starting with 1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b not found: ID does not exist" containerID="1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.643101 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b"} err="failed to get container status \"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\": rpc error: code = NotFound desc = could not find container \"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\": container with ID starting with 1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.643125 4811 scope.go:117] "RemoveContainer" containerID="36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45" Feb 19 10:20:29 crc kubenswrapper[4811]: E0219 10:20:29.643378 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\": container with ID starting with 36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45 not found: ID does not exist" containerID="36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.643400 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45"} err="failed to get container status \"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\": rpc error: code = NotFound desc = could not find container \"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\": container with ID starting with 36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.643418 4811 scope.go:117] "RemoveContainer" containerID="f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb" Feb 19 10:20:29 crc kubenswrapper[4811]: E0219 10:20:29.643775 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\": container with ID starting with f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb not found: ID does not exist" containerID="f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.643805 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb"} err="failed to get container status \"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\": rpc error: code = NotFound desc = could not find container \"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\": container with ID starting with f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.643826 4811 scope.go:117] "RemoveContainer" containerID="85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c" Feb 19 10:20:29 crc kubenswrapper[4811]: E0219 10:20:29.644204 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\": container with ID starting with 85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c not found: ID does not exist" containerID="85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.644269 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c"} err="failed to get container status \"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\": rpc error: code = NotFound desc = could not find container \"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\": container with ID starting with 85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.644318 4811 scope.go:117] "RemoveContainer" containerID="6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4" Feb 19 10:20:29 crc kubenswrapper[4811]: E0219 10:20:29.644767 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\": container with ID starting with 6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4 not found: ID does not exist" containerID="6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.644811 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4"} err="failed to get container status \"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\": rpc error: code = NotFound desc = could not find container 
\"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\": container with ID starting with 6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.644834 4811 scope.go:117] "RemoveContainer" containerID="67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e" Feb 19 10:20:29 crc kubenswrapper[4811]: E0219 10:20:29.645237 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\": container with ID starting with 67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e not found: ID does not exist" containerID="67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.645279 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e"} err="failed to get container status \"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\": rpc error: code = NotFound desc = could not find container \"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\": container with ID starting with 67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.645304 4811 scope.go:117] "RemoveContainer" containerID="99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860" Feb 19 10:20:29 crc kubenswrapper[4811]: E0219 10:20:29.645715 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\": container with ID starting with 99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860 not found: ID does not exist" 
containerID="99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.645751 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860"} err="failed to get container status \"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\": rpc error: code = NotFound desc = could not find container \"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\": container with ID starting with 99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.645771 4811 scope.go:117] "RemoveContainer" containerID="44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019" Feb 19 10:20:29 crc kubenswrapper[4811]: E0219 10:20:29.646063 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\": container with ID starting with 44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019 not found: ID does not exist" containerID="44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.646109 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019"} err="failed to get container status \"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\": rpc error: code = NotFound desc = could not find container \"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\": container with ID starting with 44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.646133 4811 scope.go:117] 
"RemoveContainer" containerID="3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.646599 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9"} err="failed to get container status \"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9\": rpc error: code = NotFound desc = could not find container \"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9\": container with ID starting with 3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.646631 4811 scope.go:117] "RemoveContainer" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.647054 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910"} err="failed to get container status \"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\": rpc error: code = NotFound desc = could not find container \"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\": container with ID starting with 89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.647081 4811 scope.go:117] "RemoveContainer" containerID="1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.647408 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b"} err="failed to get container status \"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\": rpc error: code = 
NotFound desc = could not find container \"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\": container with ID starting with 1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.647430 4811 scope.go:117] "RemoveContainer" containerID="36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.647845 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45"} err="failed to get container status \"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\": rpc error: code = NotFound desc = could not find container \"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\": container with ID starting with 36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.647869 4811 scope.go:117] "RemoveContainer" containerID="f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.648171 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb"} err="failed to get container status \"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\": rpc error: code = NotFound desc = could not find container \"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\": container with ID starting with f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.648188 4811 scope.go:117] "RemoveContainer" containerID="85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c" Feb 19 10:20:29 crc 
kubenswrapper[4811]: I0219 10:20:29.648649 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c"} err="failed to get container status \"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\": rpc error: code = NotFound desc = could not find container \"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\": container with ID starting with 85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.648714 4811 scope.go:117] "RemoveContainer" containerID="6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.649055 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4"} err="failed to get container status \"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\": rpc error: code = NotFound desc = could not find container \"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\": container with ID starting with 6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.649076 4811 scope.go:117] "RemoveContainer" containerID="67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.649399 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e"} err="failed to get container status \"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\": rpc error: code = NotFound desc = could not find container \"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\": container 
with ID starting with 67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.649428 4811 scope.go:117] "RemoveContainer" containerID="99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.649776 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860"} err="failed to get container status \"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\": rpc error: code = NotFound desc = could not find container \"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\": container with ID starting with 99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.649808 4811 scope.go:117] "RemoveContainer" containerID="44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.650074 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019"} err="failed to get container status \"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\": rpc error: code = NotFound desc = could not find container \"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\": container with ID starting with 44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.650091 4811 scope.go:117] "RemoveContainer" containerID="3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.650417 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9"} err="failed to get container status \"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9\": rpc error: code = NotFound desc = could not find container \"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9\": container with ID starting with 3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.650444 4811 scope.go:117] "RemoveContainer" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.650906 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910"} err="failed to get container status \"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\": rpc error: code = NotFound desc = could not find container \"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\": container with ID starting with 89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.650928 4811 scope.go:117] "RemoveContainer" containerID="1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.651228 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b"} err="failed to get container status \"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\": rpc error: code = NotFound desc = could not find container \"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\": container with ID starting with 1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b not found: ID does not 
exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.651253 4811 scope.go:117] "RemoveContainer" containerID="36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.651511 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45"} err="failed to get container status \"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\": rpc error: code = NotFound desc = could not find container \"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\": container with ID starting with 36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.651530 4811 scope.go:117] "RemoveContainer" containerID="f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.651815 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb"} err="failed to get container status \"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\": rpc error: code = NotFound desc = could not find container \"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\": container with ID starting with f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.651844 4811 scope.go:117] "RemoveContainer" containerID="85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.652086 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c"} err="failed to get container status 
\"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\": rpc error: code = NotFound desc = could not find container \"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\": container with ID starting with 85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.652108 4811 scope.go:117] "RemoveContainer" containerID="6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.652413 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4"} err="failed to get container status \"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\": rpc error: code = NotFound desc = could not find container \"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\": container with ID starting with 6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.652473 4811 scope.go:117] "RemoveContainer" containerID="67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.652792 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e"} err="failed to get container status \"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\": rpc error: code = NotFound desc = could not find container \"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\": container with ID starting with 67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.652820 4811 scope.go:117] "RemoveContainer" 
containerID="99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.653087 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860"} err="failed to get container status \"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\": rpc error: code = NotFound desc = could not find container \"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\": container with ID starting with 99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.653123 4811 scope.go:117] "RemoveContainer" containerID="44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.653412 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019"} err="failed to get container status \"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\": rpc error: code = NotFound desc = could not find container \"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\": container with ID starting with 44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.653443 4811 scope.go:117] "RemoveContainer" containerID="3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.654349 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9"} err="failed to get container status \"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9\": rpc error: code = NotFound desc = could 
not find container \"3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9\": container with ID starting with 3fbbf3ae488626aae81f663ff9cfe730b383c8e1b2557692987f449a9b9dbbd9 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.654381 4811 scope.go:117] "RemoveContainer" containerID="89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.654689 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910"} err="failed to get container status \"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\": rpc error: code = NotFound desc = could not find container \"89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910\": container with ID starting with 89023a0ca1c80455d76531d934f387d8a57108e61f13552b8b5e5f96f7d53910 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.654723 4811 scope.go:117] "RemoveContainer" containerID="1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.655017 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b"} err="failed to get container status \"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\": rpc error: code = NotFound desc = could not find container \"1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b\": container with ID starting with 1f4db4563e5a9f0c04c3143e502e078a5878f569c95aa3a8b16028db9446234b not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.655048 4811 scope.go:117] "RemoveContainer" containerID="36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 
10:20:29.655300 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45"} err="failed to get container status \"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\": rpc error: code = NotFound desc = could not find container \"36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45\": container with ID starting with 36e281f7c0ca6812043ed28d993e484539b438778883451ee93b5d8015f64e45 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.655327 4811 scope.go:117] "RemoveContainer" containerID="f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.655656 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb"} err="failed to get container status \"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\": rpc error: code = NotFound desc = could not find container \"f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb\": container with ID starting with f01ed612066ebf5bd93299089c0e144784b899701581b0518a07e4dc14c469fb not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.655685 4811 scope.go:117] "RemoveContainer" containerID="85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.656001 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c"} err="failed to get container status \"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\": rpc error: code = NotFound desc = could not find container \"85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c\": container with ID starting with 
85b7ebd886cb805b76fded50bd8d710d8a96ee631571495d014ddbe76b38ab7c not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.656039 4811 scope.go:117] "RemoveContainer" containerID="6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.656353 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4"} err="failed to get container status \"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\": rpc error: code = NotFound desc = could not find container \"6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4\": container with ID starting with 6ac8730235023dc861cd87b9ec1df56ae68b028ad4867342fb4699fba2de54a4 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.656380 4811 scope.go:117] "RemoveContainer" containerID="67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.656735 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e"} err="failed to get container status \"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\": rpc error: code = NotFound desc = could not find container \"67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e\": container with ID starting with 67837e760eb9b677820b9176371f792d47ef14ee17cac271f9ce840fbda6d55e not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.656763 4811 scope.go:117] "RemoveContainer" containerID="99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.657101 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860"} err="failed to get container status \"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\": rpc error: code = NotFound desc = could not find container \"99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860\": container with ID starting with 99a6771c43d4b6376268617961b23b30d7e297a543d79503d3a9d1a205ab5860 not found: ID does not exist" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.657134 4811 scope.go:117] "RemoveContainer" containerID="44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019" Feb 19 10:20:29 crc kubenswrapper[4811]: I0219 10:20:29.657405 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019"} err="failed to get container status \"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\": rpc error: code = NotFound desc = could not find container \"44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019\": container with ID starting with 44f7cd5bffc76744f21e905f192e01cca61f93e28fc3f119a7508b6fdaddf019 not found: ID does not exist" Feb 19 10:20:30 crc kubenswrapper[4811]: I0219 10:20:30.412906 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" event={"ID":"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce","Type":"ContainerStarted","Data":"db87db0c08ce02762d3d1f5d591a3d82be9d99a5c13580565dbb650923f8bc6d"} Feb 19 10:20:30 crc kubenswrapper[4811]: I0219 10:20:30.592510 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="554fedad-397a-4827-b5ad-48a681a7a2e5" path="/var/lib/kubelet/pods/554fedad-397a-4827-b5ad-48a681a7a2e5/volumes" Feb 19 10:20:31 crc kubenswrapper[4811]: I0219 10:20:31.426657 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" 
event={"ID":"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce","Type":"ContainerStarted","Data":"235ccc2de203f76572d5e8205660a32b2779d2e6b4d1fbdeaf81a8f949c0526b"} Feb 19 10:20:31 crc kubenswrapper[4811]: I0219 10:20:31.427414 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" event={"ID":"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce","Type":"ContainerStarted","Data":"f05fc17de3eecd38e6bc1e77746e43b51796eb1982fa1684ba4931fab8d59e63"} Feb 19 10:20:31 crc kubenswrapper[4811]: I0219 10:20:31.427560 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" event={"ID":"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce","Type":"ContainerStarted","Data":"9eebb3ed51d11f903a53c77ac997718e86900a789b5c2a744d53221f7b22a22c"} Feb 19 10:20:31 crc kubenswrapper[4811]: I0219 10:20:31.427689 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" event={"ID":"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce","Type":"ContainerStarted","Data":"776bef491e63562cc4edefdc0a12aa8cb913a431bf6d86a1cb80bc10131c505e"} Feb 19 10:20:31 crc kubenswrapper[4811]: I0219 10:20:31.427774 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" event={"ID":"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce","Type":"ContainerStarted","Data":"c410ed2ccd69155b656b267d4bba83b1f5be296a08d5653f623efd8f269b2333"} Feb 19 10:20:33 crc kubenswrapper[4811]: I0219 10:20:33.442902 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" event={"ID":"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce","Type":"ContainerStarted","Data":"d011d5ecfb2f90ea3d94e95ee767224c2da607ec5814de26842ab7218cdc5860"} Feb 19 10:20:36 crc kubenswrapper[4811]: I0219 10:20:36.464310 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" 
event={"ID":"de2bbb29-7d2a-4eb5-a74e-1185e14dc3ce","Type":"ContainerStarted","Data":"f7a8ff78ea4ebe29775c6fd9e6a7de051bf9785bba8c9b6332a3e124536cd3cb"} Feb 19 10:20:36 crc kubenswrapper[4811]: I0219 10:20:36.464890 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:36 crc kubenswrapper[4811]: I0219 10:20:36.498602 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" podStartSLOduration=8.498580415 podStartE2EDuration="8.498580415s" podCreationTimestamp="2026-02-19 10:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:20:36.495126275 +0000 UTC m=+725.021590961" watchObservedRunningTime="2026-02-19 10:20:36.498580415 +0000 UTC m=+725.025045101" Feb 19 10:20:36 crc kubenswrapper[4811]: I0219 10:20:36.502746 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:37 crc kubenswrapper[4811]: I0219 10:20:37.471956 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:37 crc kubenswrapper[4811]: I0219 10:20:37.472573 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:37 crc kubenswrapper[4811]: I0219 10:20:37.507378 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:43 crc kubenswrapper[4811]: I0219 10:20:43.583552 4811 scope.go:117] "RemoveContainer" containerID="8a4885b0091a928d075b1754c06a54a38c4cec164e586a2e7cb86b25fc90f049" Feb 19 10:20:43 crc kubenswrapper[4811]: E0219 10:20:43.584738 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lxbrg_openshift-multus(9766aa7e-a66b-4efc-a5de-2fe49ef00eb8)\"" pod="openshift-multus/multus-lxbrg" podUID="9766aa7e-a66b-4efc-a5de-2fe49ef00eb8" Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.340054 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7"] Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.341402 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.343201 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.350292 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7"] Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.522051 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.522603 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.522886 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrpq\" (UniqueName: \"kubernetes.io/projected/ee3ec824-bdc8-4428-8c43-82503b8f2760-kube-api-access-prrpq\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.624392 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.625003 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.625204 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prrpq\" (UniqueName: \"kubernetes.io/projected/ee3ec824-bdc8-4428-8c43-82503b8f2760-kube-api-access-prrpq\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 
crc kubenswrapper[4811]: I0219 10:20:51.625384 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.625667 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.644170 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prrpq\" (UniqueName: \"kubernetes.io/projected/ee3ec824-bdc8-4428-8c43-82503b8f2760-kube-api-access-prrpq\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: I0219 10:20:51.656057 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: E0219 10:20:51.685235 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace_ee3ec824-bdc8-4428-8c43-82503b8f2760_0(abaa17e673b646c86721b03dea9dd512661a09e5c24570f0b812945321e27ef4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 10:20:51 crc kubenswrapper[4811]: E0219 10:20:51.685412 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace_ee3ec824-bdc8-4428-8c43-82503b8f2760_0(abaa17e673b646c86721b03dea9dd512661a09e5c24570f0b812945321e27ef4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: E0219 10:20:51.685488 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace_ee3ec824-bdc8-4428-8c43-82503b8f2760_0(abaa17e673b646c86721b03dea9dd512661a09e5c24570f0b812945321e27ef4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:51 crc kubenswrapper[4811]: E0219 10:20:51.685594 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace(ee3ec824-bdc8-4428-8c43-82503b8f2760)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace(ee3ec824-bdc8-4428-8c43-82503b8f2760)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace_ee3ec824-bdc8-4428-8c43-82503b8f2760_0(abaa17e673b646c86721b03dea9dd512661a09e5c24570f0b812945321e27ef4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" podUID="ee3ec824-bdc8-4428-8c43-82503b8f2760" Feb 19 10:20:52 crc kubenswrapper[4811]: I0219 10:20:52.574755 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:52 crc kubenswrapper[4811]: I0219 10:20:52.575386 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:52 crc kubenswrapper[4811]: E0219 10:20:52.617993 4811 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace_ee3ec824-bdc8-4428-8c43-82503b8f2760_0(1ceff8b9813df21622b382db21d9678ca5c5d4884f360fbc2ef55e8dfbfbc755): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 19 10:20:52 crc kubenswrapper[4811]: E0219 10:20:52.618128 4811 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace_ee3ec824-bdc8-4428-8c43-82503b8f2760_0(1ceff8b9813df21622b382db21d9678ca5c5d4884f360fbc2ef55e8dfbfbc755): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:52 crc kubenswrapper[4811]: E0219 10:20:52.618171 4811 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace_ee3ec824-bdc8-4428-8c43-82503b8f2760_0(1ceff8b9813df21622b382db21d9678ca5c5d4884f360fbc2ef55e8dfbfbc755): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:20:52 crc kubenswrapper[4811]: E0219 10:20:52.618251 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace(ee3ec824-bdc8-4428-8c43-82503b8f2760)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace(ee3ec824-bdc8-4428-8c43-82503b8f2760)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_openshift-marketplace_ee3ec824-bdc8-4428-8c43-82503b8f2760_0(1ceff8b9813df21622b382db21d9678ca5c5d4884f360fbc2ef55e8dfbfbc755): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" podUID="ee3ec824-bdc8-4428-8c43-82503b8f2760" Feb 19 10:20:58 crc kubenswrapper[4811]: I0219 10:20:58.583915 4811 scope.go:117] "RemoveContainer" containerID="8a4885b0091a928d075b1754c06a54a38c4cec164e586a2e7cb86b25fc90f049" Feb 19 10:20:58 crc kubenswrapper[4811]: I0219 10:20:58.981165 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fv7h8" Feb 19 10:20:59 crc kubenswrapper[4811]: I0219 10:20:59.629528 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxbrg_9766aa7e-a66b-4efc-a5de-2fe49ef00eb8/kube-multus/2.log" Feb 19 10:20:59 crc kubenswrapper[4811]: I0219 10:20:59.629633 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxbrg" event={"ID":"9766aa7e-a66b-4efc-a5de-2fe49ef00eb8","Type":"ContainerStarted","Data":"02dae0f90c5b7fb6a5693112a768f6fc997225fa535538ddb8aa1af074496305"} Feb 19 10:21:07 crc kubenswrapper[4811]: 
I0219 10:21:07.582992 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:21:07 crc kubenswrapper[4811]: I0219 10:21:07.584210 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:21:07 crc kubenswrapper[4811]: I0219 10:21:07.807708 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7"] Feb 19 10:21:08 crc kubenswrapper[4811]: I0219 10:21:08.689645 4811 generic.go:334] "Generic (PLEG): container finished" podID="ee3ec824-bdc8-4428-8c43-82503b8f2760" containerID="6cec6bbbce2c877eefc921810eeb8ac476102a8ae746dce372867ee0df4c03f6" exitCode=0 Feb 19 10:21:08 crc kubenswrapper[4811]: I0219 10:21:08.689711 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" event={"ID":"ee3ec824-bdc8-4428-8c43-82503b8f2760","Type":"ContainerDied","Data":"6cec6bbbce2c877eefc921810eeb8ac476102a8ae746dce372867ee0df4c03f6"} Feb 19 10:21:08 crc kubenswrapper[4811]: I0219 10:21:08.690028 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" event={"ID":"ee3ec824-bdc8-4428-8c43-82503b8f2760","Type":"ContainerStarted","Data":"522e3360cd664c828754b898d37a1e2e00dbd62083ea8abd3e020c342fef2b5c"} Feb 19 10:21:08 crc kubenswrapper[4811]: I0219 10:21:08.934557 4811 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 10:21:10 crc kubenswrapper[4811]: I0219 10:21:10.706101 4811 generic.go:334] "Generic (PLEG): container finished" podID="ee3ec824-bdc8-4428-8c43-82503b8f2760" 
containerID="4d440102fab88de87c5ce7376076c04d7f33d23b684bd591fef5e2923d760c3b" exitCode=0 Feb 19 10:21:10 crc kubenswrapper[4811]: I0219 10:21:10.706149 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" event={"ID":"ee3ec824-bdc8-4428-8c43-82503b8f2760","Type":"ContainerDied","Data":"4d440102fab88de87c5ce7376076c04d7f33d23b684bd591fef5e2923d760c3b"} Feb 19 10:21:10 crc kubenswrapper[4811]: I0219 10:21:10.999875 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wwf2l"] Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.002220 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.005610 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wwf2l"] Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.006649 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-catalog-content\") pod \"redhat-operators-wwf2l\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.006787 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-utilities\") pod \"redhat-operators-wwf2l\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.006934 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx7rd\" (UniqueName: 
\"kubernetes.io/projected/c796d39b-1173-4c3f-9350-672fbbfcab48-kube-api-access-vx7rd\") pod \"redhat-operators-wwf2l\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.107242 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx7rd\" (UniqueName: \"kubernetes.io/projected/c796d39b-1173-4c3f-9350-672fbbfcab48-kube-api-access-vx7rd\") pod \"redhat-operators-wwf2l\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.107300 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-catalog-content\") pod \"redhat-operators-wwf2l\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.107319 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-utilities\") pod \"redhat-operators-wwf2l\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.107754 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-utilities\") pod \"redhat-operators-wwf2l\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.107976 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-catalog-content\") pod \"redhat-operators-wwf2l\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.134494 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx7rd\" (UniqueName: \"kubernetes.io/projected/c796d39b-1173-4c3f-9350-672fbbfcab48-kube-api-access-vx7rd\") pod \"redhat-operators-wwf2l\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.334881 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.713367 4811 generic.go:334] "Generic (PLEG): container finished" podID="ee3ec824-bdc8-4428-8c43-82503b8f2760" containerID="482fef820864dcb904e010fc75fac278674d6fe12206aab05d0e8e8bc7961a4c" exitCode=0 Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.713408 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" event={"ID":"ee3ec824-bdc8-4428-8c43-82503b8f2760","Type":"ContainerDied","Data":"482fef820864dcb904e010fc75fac278674d6fe12206aab05d0e8e8bc7961a4c"} Feb 19 10:21:11 crc kubenswrapper[4811]: I0219 10:21:11.798597 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wwf2l"] Feb 19 10:21:12 crc kubenswrapper[4811]: I0219 10:21:12.719188 4811 generic.go:334] "Generic (PLEG): container finished" podID="c796d39b-1173-4c3f-9350-672fbbfcab48" containerID="8aa643c4d1b5970b1c261b918b6b86b0dae0d14ccba0e6e1186cc601dd22f626" exitCode=0 Feb 19 10:21:12 crc kubenswrapper[4811]: I0219 10:21:12.719282 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wwf2l" event={"ID":"c796d39b-1173-4c3f-9350-672fbbfcab48","Type":"ContainerDied","Data":"8aa643c4d1b5970b1c261b918b6b86b0dae0d14ccba0e6e1186cc601dd22f626"} Feb 19 10:21:12 crc kubenswrapper[4811]: I0219 10:21:12.720306 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwf2l" event={"ID":"c796d39b-1173-4c3f-9350-672fbbfcab48","Type":"ContainerStarted","Data":"b75464b7af277daf90e9c1d5315786a1e865b89faafdd3faa6097944e251e028"} Feb 19 10:21:12 crc kubenswrapper[4811]: I0219 10:21:12.944330 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.033796 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prrpq\" (UniqueName: \"kubernetes.io/projected/ee3ec824-bdc8-4428-8c43-82503b8f2760-kube-api-access-prrpq\") pod \"ee3ec824-bdc8-4428-8c43-82503b8f2760\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.033845 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-bundle\") pod \"ee3ec824-bdc8-4428-8c43-82503b8f2760\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.033862 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-util\") pod \"ee3ec824-bdc8-4428-8c43-82503b8f2760\" (UID: \"ee3ec824-bdc8-4428-8c43-82503b8f2760\") " Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.034579 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-bundle" (OuterVolumeSpecName: "bundle") pod "ee3ec824-bdc8-4428-8c43-82503b8f2760" (UID: "ee3ec824-bdc8-4428-8c43-82503b8f2760"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.039901 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3ec824-bdc8-4428-8c43-82503b8f2760-kube-api-access-prrpq" (OuterVolumeSpecName: "kube-api-access-prrpq") pod "ee3ec824-bdc8-4428-8c43-82503b8f2760" (UID: "ee3ec824-bdc8-4428-8c43-82503b8f2760"). InnerVolumeSpecName "kube-api-access-prrpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.048004 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-util" (OuterVolumeSpecName: "util") pod "ee3ec824-bdc8-4428-8c43-82503b8f2760" (UID: "ee3ec824-bdc8-4428-8c43-82503b8f2760"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.135362 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prrpq\" (UniqueName: \"kubernetes.io/projected/ee3ec824-bdc8-4428-8c43-82503b8f2760-kube-api-access-prrpq\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.135402 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.135414 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee3ec824-bdc8-4428-8c43-82503b8f2760-util\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.735027 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" event={"ID":"ee3ec824-bdc8-4428-8c43-82503b8f2760","Type":"ContainerDied","Data":"522e3360cd664c828754b898d37a1e2e00dbd62083ea8abd3e020c342fef2b5c"} Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.735433 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="522e3360cd664c828754b898d37a1e2e00dbd62083ea8abd3e020c342fef2b5c" Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.735048 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7" Feb 19 10:21:13 crc kubenswrapper[4811]: I0219 10:21:13.738740 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwf2l" event={"ID":"c796d39b-1173-4c3f-9350-672fbbfcab48","Type":"ContainerStarted","Data":"139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea"} Feb 19 10:21:14 crc kubenswrapper[4811]: I0219 10:21:14.746572 4811 generic.go:334] "Generic (PLEG): container finished" podID="c796d39b-1173-4c3f-9350-672fbbfcab48" containerID="139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea" exitCode=0 Feb 19 10:21:14 crc kubenswrapper[4811]: I0219 10:21:14.746633 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwf2l" event={"ID":"c796d39b-1173-4c3f-9350-672fbbfcab48","Type":"ContainerDied","Data":"139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea"} Feb 19 10:21:15 crc kubenswrapper[4811]: I0219 10:21:15.756374 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwf2l" event={"ID":"c796d39b-1173-4c3f-9350-672fbbfcab48","Type":"ContainerStarted","Data":"ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951"} Feb 19 10:21:15 crc kubenswrapper[4811]: I0219 10:21:15.789277 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wwf2l" podStartSLOduration=3.327334977 podStartE2EDuration="5.789246689s" podCreationTimestamp="2026-02-19 10:21:10 +0000 UTC" firstStartedPulling="2026-02-19 10:21:12.720974525 +0000 UTC m=+761.247439211" lastFinishedPulling="2026-02-19 10:21:15.182886247 +0000 UTC m=+763.709350923" observedRunningTime="2026-02-19 10:21:15.786785275 +0000 UTC m=+764.313250031" watchObservedRunningTime="2026-02-19 10:21:15.789246689 +0000 UTC m=+764.315711405" Feb 19 10:21:17 crc 
kubenswrapper[4811]: I0219 10:21:17.413558 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-s2r8d"] Feb 19 10:21:17 crc kubenswrapper[4811]: E0219 10:21:17.413829 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3ec824-bdc8-4428-8c43-82503b8f2760" containerName="extract" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.413845 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3ec824-bdc8-4428-8c43-82503b8f2760" containerName="extract" Feb 19 10:21:17 crc kubenswrapper[4811]: E0219 10:21:17.413856 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3ec824-bdc8-4428-8c43-82503b8f2760" containerName="pull" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.413865 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3ec824-bdc8-4428-8c43-82503b8f2760" containerName="pull" Feb 19 10:21:17 crc kubenswrapper[4811]: E0219 10:21:17.413880 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3ec824-bdc8-4428-8c43-82503b8f2760" containerName="util" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.413887 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3ec824-bdc8-4428-8c43-82503b8f2760" containerName="util" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.414024 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3ec824-bdc8-4428-8c43-82503b8f2760" containerName="extract" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.415322 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-s2r8d" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.419965 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.421123 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-pmgx2" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.421262 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.429992 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-s2r8d"] Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.587211 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kznjt\" (UniqueName: \"kubernetes.io/projected/cc55c956-c42d-49cc-a97b-cbb28863cdfd-kube-api-access-kznjt\") pod \"nmstate-operator-694c9596b7-s2r8d\" (UID: \"cc55c956-c42d-49cc-a97b-cbb28863cdfd\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-s2r8d" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.689184 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kznjt\" (UniqueName: \"kubernetes.io/projected/cc55c956-c42d-49cc-a97b-cbb28863cdfd-kube-api-access-kznjt\") pod \"nmstate-operator-694c9596b7-s2r8d\" (UID: \"cc55c956-c42d-49cc-a97b-cbb28863cdfd\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-s2r8d" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.710504 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kznjt\" (UniqueName: \"kubernetes.io/projected/cc55c956-c42d-49cc-a97b-cbb28863cdfd-kube-api-access-kznjt\") pod \"nmstate-operator-694c9596b7-s2r8d\" (UID: 
\"cc55c956-c42d-49cc-a97b-cbb28863cdfd\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-s2r8d" Feb 19 10:21:17 crc kubenswrapper[4811]: I0219 10:21:17.729251 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-s2r8d" Feb 19 10:21:18 crc kubenswrapper[4811]: I0219 10:21:18.137568 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-s2r8d"] Feb 19 10:21:18 crc kubenswrapper[4811]: I0219 10:21:18.774943 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-s2r8d" event={"ID":"cc55c956-c42d-49cc-a97b-cbb28863cdfd","Type":"ContainerStarted","Data":"5a069ecf7c00107b6176d086be7c2a145354b5c36a697c0b09cb3deb8f4b7ece"} Feb 19 10:21:21 crc kubenswrapper[4811]: I0219 10:21:21.335078 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:21 crc kubenswrapper[4811]: I0219 10:21:21.335131 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:21 crc kubenswrapper[4811]: I0219 10:21:21.382962 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:21 crc kubenswrapper[4811]: I0219 10:21:21.829281 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:22 crc kubenswrapper[4811]: I0219 10:21:22.799380 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-s2r8d" event={"ID":"cc55c956-c42d-49cc-a97b-cbb28863cdfd","Type":"ContainerStarted","Data":"bd4fbc09b01310a49be50b7ec0b16b1a0589b03e733de3917c8614aa1aa83714"} Feb 19 10:21:22 crc kubenswrapper[4811]: I0219 10:21:22.820549 4811 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-s2r8d" podStartSLOduration=2.001403716 podStartE2EDuration="5.820525182s" podCreationTimestamp="2026-02-19 10:21:17 +0000 UTC" firstStartedPulling="2026-02-19 10:21:18.149986108 +0000 UTC m=+766.676450794" lastFinishedPulling="2026-02-19 10:21:21.969107574 +0000 UTC m=+770.495572260" observedRunningTime="2026-02-19 10:21:22.818435308 +0000 UTC m=+771.344900004" watchObservedRunningTime="2026-02-19 10:21:22.820525182 +0000 UTC m=+771.346989878" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.571497 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwf2l"] Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.685534 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7"] Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.686874 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.696336 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-8v4lh" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.728692 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7"] Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.728801 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl"] Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.729693 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.742114 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.756170 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl"] Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.777783 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7crtb\" (UniqueName: \"kubernetes.io/projected/d74c5c61-8e57-4702-8414-0e94a5080ce5-kube-api-access-7crtb\") pod \"nmstate-metrics-58c85c668d-xgsh7\" (UID: \"d74c5c61-8e57-4702-8414-0e94a5080ce5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.777856 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r68zz\" (UniqueName: \"kubernetes.io/projected/8635072a-afd3-426c-bed8-f0aa9ddf1371-kube-api-access-r68zz\") pod \"nmstate-webhook-866bcb46dc-pdhvl\" (UID: \"8635072a-afd3-426c-bed8-f0aa9ddf1371\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.777887 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8635072a-afd3-426c-bed8-f0aa9ddf1371-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-pdhvl\" (UID: \"8635072a-afd3-426c-bed8-f0aa9ddf1371\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.789004 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lcv4q"] Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.790213 4811 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.817197 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wwf2l" podUID="c796d39b-1173-4c3f-9350-672fbbfcab48" containerName="registry-server" containerID="cri-o://ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951" gracePeriod=2 Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.878968 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7crtb\" (UniqueName: \"kubernetes.io/projected/d74c5c61-8e57-4702-8414-0e94a5080ce5-kube-api-access-7crtb\") pod \"nmstate-metrics-58c85c668d-xgsh7\" (UID: \"d74c5c61-8e57-4702-8414-0e94a5080ce5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.879120 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r68zz\" (UniqueName: \"kubernetes.io/projected/8635072a-afd3-426c-bed8-f0aa9ddf1371-kube-api-access-r68zz\") pod \"nmstate-webhook-866bcb46dc-pdhvl\" (UID: \"8635072a-afd3-426c-bed8-f0aa9ddf1371\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.879230 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/56faa147-2a2c-437e-a221-88fa87e89a56-nmstate-lock\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.879330 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8635072a-afd3-426c-bed8-f0aa9ddf1371-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-pdhvl\" (UID: 
\"8635072a-afd3-426c-bed8-f0aa9ddf1371\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.879418 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/56faa147-2a2c-437e-a221-88fa87e89a56-dbus-socket\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.879528 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzkkq\" (UniqueName: \"kubernetes.io/projected/56faa147-2a2c-437e-a221-88fa87e89a56-kube-api-access-bzkkq\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:23 crc kubenswrapper[4811]: E0219 10:21:23.879585 4811 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.879645 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/56faa147-2a2c-437e-a221-88fa87e89a56-ovs-socket\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:23 crc kubenswrapper[4811]: E0219 10:21:23.879805 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8635072a-afd3-426c-bed8-f0aa9ddf1371-tls-key-pair podName:8635072a-afd3-426c-bed8-f0aa9ddf1371 nodeName:}" failed. No retries permitted until 2026-02-19 10:21:24.379712472 +0000 UTC m=+772.906177158 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/8635072a-afd3-426c-bed8-f0aa9ddf1371-tls-key-pair") pod "nmstate-webhook-866bcb46dc-pdhvl" (UID: "8635072a-afd3-426c-bed8-f0aa9ddf1371") : secret "openshift-nmstate-webhook" not found Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.892199 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2"] Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.893349 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.901764 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.902072 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.904468 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cl7s5" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.905584 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7crtb\" (UniqueName: \"kubernetes.io/projected/d74c5c61-8e57-4702-8414-0e94a5080ce5-kube-api-access-7crtb\") pod \"nmstate-metrics-58c85c668d-xgsh7\" (UID: \"d74c5c61-8e57-4702-8414-0e94a5080ce5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.912817 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r68zz\" (UniqueName: \"kubernetes.io/projected/8635072a-afd3-426c-bed8-f0aa9ddf1371-kube-api-access-r68zz\") pod \"nmstate-webhook-866bcb46dc-pdhvl\" (UID: \"8635072a-afd3-426c-bed8-f0aa9ddf1371\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" Feb 
19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.914939 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2"] Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.981463 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c44tj\" (UniqueName: \"kubernetes.io/projected/4f52fbce-dab0-4cda-b2c7-80a4caab9fff-kube-api-access-c44tj\") pod \"nmstate-console-plugin-5c78fc5d65-4lhw2\" (UID: \"4f52fbce-dab0-4cda-b2c7-80a4caab9fff\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.982070 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4f52fbce-dab0-4cda-b2c7-80a4caab9fff-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4lhw2\" (UID: \"4f52fbce-dab0-4cda-b2c7-80a4caab9fff\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.982158 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/56faa147-2a2c-437e-a221-88fa87e89a56-nmstate-lock\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.982246 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/56faa147-2a2c-437e-a221-88fa87e89a56-dbus-socket\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.982333 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzkkq\" (UniqueName: 
\"kubernetes.io/projected/56faa147-2a2c-437e-a221-88fa87e89a56-kube-api-access-bzkkq\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.982399 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f52fbce-dab0-4cda-b2c7-80a4caab9fff-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4lhw2\" (UID: \"4f52fbce-dab0-4cda-b2c7-80a4caab9fff\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.982492 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/56faa147-2a2c-437e-a221-88fa87e89a56-ovs-socket\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.982602 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/56faa147-2a2c-437e-a221-88fa87e89a56-nmstate-lock\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.982778 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/56faa147-2a2c-437e-a221-88fa87e89a56-ovs-socket\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:23 crc kubenswrapper[4811]: I0219 10:21:23.983300 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/56faa147-2a2c-437e-a221-88fa87e89a56-dbus-socket\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.012790 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.019591 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzkkq\" (UniqueName: \"kubernetes.io/projected/56faa147-2a2c-437e-a221-88fa87e89a56-kube-api-access-bzkkq\") pod \"nmstate-handler-lcv4q\" (UID: \"56faa147-2a2c-437e-a221-88fa87e89a56\") " pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.083584 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c44tj\" (UniqueName: \"kubernetes.io/projected/4f52fbce-dab0-4cda-b2c7-80a4caab9fff-kube-api-access-c44tj\") pod \"nmstate-console-plugin-5c78fc5d65-4lhw2\" (UID: \"4f52fbce-dab0-4cda-b2c7-80a4caab9fff\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.083728 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4f52fbce-dab0-4cda-b2c7-80a4caab9fff-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4lhw2\" (UID: \"4f52fbce-dab0-4cda-b2c7-80a4caab9fff\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.083856 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f52fbce-dab0-4cda-b2c7-80a4caab9fff-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4lhw2\" (UID: \"4f52fbce-dab0-4cda-b2c7-80a4caab9fff\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.084784 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4f52fbce-dab0-4cda-b2c7-80a4caab9fff-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4lhw2\" (UID: \"4f52fbce-dab0-4cda-b2c7-80a4caab9fff\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.094718 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f52fbce-dab0-4cda-b2c7-80a4caab9fff-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4lhw2\" (UID: \"4f52fbce-dab0-4cda-b2c7-80a4caab9fff\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.106545 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c44tj\" (UniqueName: \"kubernetes.io/projected/4f52fbce-dab0-4cda-b2c7-80a4caab9fff-kube-api-access-c44tj\") pod \"nmstate-console-plugin-5c78fc5d65-4lhw2\" (UID: \"4f52fbce-dab0-4cda-b2c7-80a4caab9fff\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.130005 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.143690 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7458d57f55-mngr4"] Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.144636 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: W0219 10:21:24.153263 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56faa147_2a2c_437e_a221_88fa87e89a56.slice/crio-27d625cceb9135bf2ac1143fcad2689de64708b7e77966abdc55594d92769b5b WatchSource:0}: Error finding container 27d625cceb9135bf2ac1143fcad2689de64708b7e77966abdc55594d92769b5b: Status 404 returned error can't find the container with id 27d625cceb9135bf2ac1143fcad2689de64708b7e77966abdc55594d92769b5b Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.160333 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7458d57f55-mngr4"] Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.185969 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-service-ca\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.186037 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w77qv\" (UniqueName: \"kubernetes.io/projected/fa60507d-a3c1-4707-84c1-890d146ab26d-kube-api-access-w77qv\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.186223 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-trusted-ca-bundle\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " 
pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.186334 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-oauth-serving-cert\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.186412 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa60507d-a3c1-4707-84c1-890d146ab26d-console-oauth-config\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.186612 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-console-config\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.186656 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa60507d-a3c1-4707-84c1-890d146ab26d-console-serving-cert\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.196529 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.276906 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.287561 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx7rd\" (UniqueName: \"kubernetes.io/projected/c796d39b-1173-4c3f-9350-672fbbfcab48-kube-api-access-vx7rd\") pod \"c796d39b-1173-4c3f-9350-672fbbfcab48\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.287691 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-catalog-content\") pod \"c796d39b-1173-4c3f-9350-672fbbfcab48\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.287732 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-utilities\") pod \"c796d39b-1173-4c3f-9350-672fbbfcab48\" (UID: \"c796d39b-1173-4c3f-9350-672fbbfcab48\") " Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.287971 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-service-ca\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.288053 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w77qv\" (UniqueName: \"kubernetes.io/projected/fa60507d-a3c1-4707-84c1-890d146ab26d-kube-api-access-w77qv\") 
pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.288167 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-trusted-ca-bundle\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.288260 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-oauth-serving-cert\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.288375 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa60507d-a3c1-4707-84c1-890d146ab26d-console-oauth-config\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.288434 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-console-config\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.288510 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa60507d-a3c1-4707-84c1-890d146ab26d-console-serving-cert\") pod 
\"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.290135 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-console-config\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.290203 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-oauth-serving-cert\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.290259 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-service-ca\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.290682 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa60507d-a3c1-4707-84c1-890d146ab26d-trusted-ca-bundle\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.290832 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-utilities" (OuterVolumeSpecName: "utilities") pod "c796d39b-1173-4c3f-9350-672fbbfcab48" (UID: "c796d39b-1173-4c3f-9350-672fbbfcab48"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.296263 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fa60507d-a3c1-4707-84c1-890d146ab26d-console-oauth-config\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.300742 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c796d39b-1173-4c3f-9350-672fbbfcab48-kube-api-access-vx7rd" (OuterVolumeSpecName: "kube-api-access-vx7rd") pod "c796d39b-1173-4c3f-9350-672fbbfcab48" (UID: "c796d39b-1173-4c3f-9350-672fbbfcab48"). InnerVolumeSpecName "kube-api-access-vx7rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.301095 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa60507d-a3c1-4707-84c1-890d146ab26d-console-serving-cert\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.308528 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w77qv\" (UniqueName: \"kubernetes.io/projected/fa60507d-a3c1-4707-84c1-890d146ab26d-kube-api-access-w77qv\") pod \"console-7458d57f55-mngr4\" (UID: \"fa60507d-a3c1-4707-84c1-890d146ab26d\") " pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.359692 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7"] Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.390441 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8635072a-afd3-426c-bed8-f0aa9ddf1371-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-pdhvl\" (UID: \"8635072a-afd3-426c-bed8-f0aa9ddf1371\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.390569 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.390591 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx7rd\" (UniqueName: \"kubernetes.io/projected/c796d39b-1173-4c3f-9350-672fbbfcab48-kube-api-access-vx7rd\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.396765 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8635072a-afd3-426c-bed8-f0aa9ddf1371-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-pdhvl\" (UID: \"8635072a-afd3-426c-bed8-f0aa9ddf1371\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.478119 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.657092 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c796d39b-1173-4c3f-9350-672fbbfcab48" (UID: "c796d39b-1173-4c3f-9350-672fbbfcab48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.657616 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.693728 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c796d39b-1173-4c3f-9350-672fbbfcab48-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.735281 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2"] Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.825475 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" event={"ID":"4f52fbce-dab0-4cda-b2c7-80a4caab9fff","Type":"ContainerStarted","Data":"f1a5975bcc13441d23d08b95bd68500cf210ecd53736b46d17318e7309c90363"} Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.827720 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7" event={"ID":"d74c5c61-8e57-4702-8414-0e94a5080ce5","Type":"ContainerStarted","Data":"44cc48bde0beb862effab1eaad710c530479e04cad6b565299a49b8aeae3abfd"} Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.833342 4811 generic.go:334] "Generic (PLEG): container finished" podID="c796d39b-1173-4c3f-9350-672fbbfcab48" containerID="ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951" exitCode=0 Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.833502 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwf2l" event={"ID":"c796d39b-1173-4c3f-9350-672fbbfcab48","Type":"ContainerDied","Data":"ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951"} Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.833608 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wwf2l" 
event={"ID":"c796d39b-1173-4c3f-9350-672fbbfcab48","Type":"ContainerDied","Data":"b75464b7af277daf90e9c1d5315786a1e865b89faafdd3faa6097944e251e028"} Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.833640 4811 scope.go:117] "RemoveContainer" containerID="ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.833636 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wwf2l" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.837702 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lcv4q" event={"ID":"56faa147-2a2c-437e-a221-88fa87e89a56","Type":"ContainerStarted","Data":"27d625cceb9135bf2ac1143fcad2689de64708b7e77966abdc55594d92769b5b"} Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.847871 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl"] Feb 19 10:21:24 crc kubenswrapper[4811]: W0219 10:21:24.855987 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8635072a_afd3_426c_bed8_f0aa9ddf1371.slice/crio-9af46ca1ad9dd0c40e041e5a821971a9e50afe2d9881091c1830835cbc12914d WatchSource:0}: Error finding container 9af46ca1ad9dd0c40e041e5a821971a9e50afe2d9881091c1830835cbc12914d: Status 404 returned error can't find the container with id 9af46ca1ad9dd0c40e041e5a821971a9e50afe2d9881091c1830835cbc12914d Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.864820 4811 scope.go:117] "RemoveContainer" containerID="139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.881576 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wwf2l"] Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.891035 4811 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-console/console-7458d57f55-mngr4"] Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.902059 4811 scope.go:117] "RemoveContainer" containerID="8aa643c4d1b5970b1c261b918b6b86b0dae0d14ccba0e6e1186cc601dd22f626" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.904656 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wwf2l"] Feb 19 10:21:24 crc kubenswrapper[4811]: W0219 10:21:24.906661 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa60507d_a3c1_4707_84c1_890d146ab26d.slice/crio-122adff5845d2e49dca7f811dfdfecc62f09e7ea36f7bfd08a507d80c151eaee WatchSource:0}: Error finding container 122adff5845d2e49dca7f811dfdfecc62f09e7ea36f7bfd08a507d80c151eaee: Status 404 returned error can't find the container with id 122adff5845d2e49dca7f811dfdfecc62f09e7ea36f7bfd08a507d80c151eaee Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.936126 4811 scope.go:117] "RemoveContainer" containerID="ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951" Feb 19 10:21:24 crc kubenswrapper[4811]: E0219 10:21:24.937302 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951\": container with ID starting with ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951 not found: ID does not exist" containerID="ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.937391 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951"} err="failed to get container status \"ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951\": rpc error: code = NotFound desc = could not find 
container \"ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951\": container with ID starting with ffce8a32f72a34d2ef80807d325b40cd79c0ea646fe1513fd606f8603ca40951 not found: ID does not exist" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.937475 4811 scope.go:117] "RemoveContainer" containerID="139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea" Feb 19 10:21:24 crc kubenswrapper[4811]: E0219 10:21:24.938348 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea\": container with ID starting with 139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea not found: ID does not exist" containerID="139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.938377 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea"} err="failed to get container status \"139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea\": rpc error: code = NotFound desc = could not find container \"139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea\": container with ID starting with 139d13735460a70564a8ca11546211b2a9af979c44754dba3196a640e53fb0ea not found: ID does not exist" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.938398 4811 scope.go:117] "RemoveContainer" containerID="8aa643c4d1b5970b1c261b918b6b86b0dae0d14ccba0e6e1186cc601dd22f626" Feb 19 10:21:24 crc kubenswrapper[4811]: E0219 10:21:24.938755 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa643c4d1b5970b1c261b918b6b86b0dae0d14ccba0e6e1186cc601dd22f626\": container with ID starting with 8aa643c4d1b5970b1c261b918b6b86b0dae0d14ccba0e6e1186cc601dd22f626 not found: ID does 
not exist" containerID="8aa643c4d1b5970b1c261b918b6b86b0dae0d14ccba0e6e1186cc601dd22f626" Feb 19 10:21:24 crc kubenswrapper[4811]: I0219 10:21:24.938785 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa643c4d1b5970b1c261b918b6b86b0dae0d14ccba0e6e1186cc601dd22f626"} err="failed to get container status \"8aa643c4d1b5970b1c261b918b6b86b0dae0d14ccba0e6e1186cc601dd22f626\": rpc error: code = NotFound desc = could not find container \"8aa643c4d1b5970b1c261b918b6b86b0dae0d14ccba0e6e1186cc601dd22f626\": container with ID starting with 8aa643c4d1b5970b1c261b918b6b86b0dae0d14ccba0e6e1186cc601dd22f626 not found: ID does not exist" Feb 19 10:21:25 crc kubenswrapper[4811]: I0219 10:21:25.848846 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7458d57f55-mngr4" event={"ID":"fa60507d-a3c1-4707-84c1-890d146ab26d","Type":"ContainerStarted","Data":"96431cb421b20c644a8dec6c31afaf145daf7604518150a91a668f3721e7ea21"} Feb 19 10:21:25 crc kubenswrapper[4811]: I0219 10:21:25.848900 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7458d57f55-mngr4" event={"ID":"fa60507d-a3c1-4707-84c1-890d146ab26d","Type":"ContainerStarted","Data":"122adff5845d2e49dca7f811dfdfecc62f09e7ea36f7bfd08a507d80c151eaee"} Feb 19 10:21:25 crc kubenswrapper[4811]: I0219 10:21:25.850891 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" event={"ID":"8635072a-afd3-426c-bed8-f0aa9ddf1371","Type":"ContainerStarted","Data":"9af46ca1ad9dd0c40e041e5a821971a9e50afe2d9881091c1830835cbc12914d"} Feb 19 10:21:25 crc kubenswrapper[4811]: I0219 10:21:25.872719 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7458d57f55-mngr4" podStartSLOduration=1.872698159 podStartE2EDuration="1.872698159s" podCreationTimestamp="2026-02-19 10:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:21:25.871635081 +0000 UTC m=+774.398099787" watchObservedRunningTime="2026-02-19 10:21:25.872698159 +0000 UTC m=+774.399162845" Feb 19 10:21:26 crc kubenswrapper[4811]: I0219 10:21:26.594832 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c796d39b-1173-4c3f-9350-672fbbfcab48" path="/var/lib/kubelet/pods/c796d39b-1173-4c3f-9350-672fbbfcab48/volumes" Feb 19 10:21:26 crc kubenswrapper[4811]: I0219 10:21:26.788352 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:21:26 crc kubenswrapper[4811]: I0219 10:21:26.788442 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:21:27 crc kubenswrapper[4811]: I0219 10:21:27.867504 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" event={"ID":"8635072a-afd3-426c-bed8-f0aa9ddf1371","Type":"ContainerStarted","Data":"2d0e22b7e6cf5bba22031e64dea64e114b13803ede9fc2df486220ec17645c93"} Feb 19 10:21:27 crc kubenswrapper[4811]: I0219 10:21:27.868424 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" Feb 19 10:21:27 crc kubenswrapper[4811]: I0219 10:21:27.869982 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" 
event={"ID":"4f52fbce-dab0-4cda-b2c7-80a4caab9fff","Type":"ContainerStarted","Data":"068f31bdba32e412f67169985d417a09e140cec35a871960e74844111c6d29f8"} Feb 19 10:21:27 crc kubenswrapper[4811]: I0219 10:21:27.873498 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7" event={"ID":"d74c5c61-8e57-4702-8414-0e94a5080ce5","Type":"ContainerStarted","Data":"8f2478fa8380cae318b272b1e8f8f99067bf968bd0cd06333f40a12470e756f4"} Feb 19 10:21:27 crc kubenswrapper[4811]: I0219 10:21:27.876121 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lcv4q" event={"ID":"56faa147-2a2c-437e-a221-88fa87e89a56","Type":"ContainerStarted","Data":"885f46c0eae5c28587a7c94a947e21155b80b7d68b21f565b5c386f01b95f65d"} Feb 19 10:21:27 crc kubenswrapper[4811]: I0219 10:21:27.876297 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:27 crc kubenswrapper[4811]: I0219 10:21:27.895731 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" podStartSLOduration=2.715595704 podStartE2EDuration="4.895688345s" podCreationTimestamp="2026-02-19 10:21:23 +0000 UTC" firstStartedPulling="2026-02-19 10:21:24.864766139 +0000 UTC m=+773.391230825" lastFinishedPulling="2026-02-19 10:21:27.04485876 +0000 UTC m=+775.571323466" observedRunningTime="2026-02-19 10:21:27.889608577 +0000 UTC m=+776.416073263" watchObservedRunningTime="2026-02-19 10:21:27.895688345 +0000 UTC m=+776.422153031" Feb 19 10:21:27 crc kubenswrapper[4811]: I0219 10:21:27.914761 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4lhw2" podStartSLOduration=2.633940376 podStartE2EDuration="4.914721439s" podCreationTimestamp="2026-02-19 10:21:23 +0000 UTC" firstStartedPulling="2026-02-19 10:21:24.757747203 +0000 UTC 
m=+773.284211889" lastFinishedPulling="2026-02-19 10:21:27.038528236 +0000 UTC m=+775.564992952" observedRunningTime="2026-02-19 10:21:27.905609212 +0000 UTC m=+776.432073908" watchObservedRunningTime="2026-02-19 10:21:27.914721439 +0000 UTC m=+776.441186125" Feb 19 10:21:27 crc kubenswrapper[4811]: I0219 10:21:27.931139 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lcv4q" podStartSLOduration=2.045431627 podStartE2EDuration="4.931087133s" podCreationTimestamp="2026-02-19 10:21:23 +0000 UTC" firstStartedPulling="2026-02-19 10:21:24.156542115 +0000 UTC m=+772.683006801" lastFinishedPulling="2026-02-19 10:21:27.042197621 +0000 UTC m=+775.568662307" observedRunningTime="2026-02-19 10:21:27.924271107 +0000 UTC m=+776.450735803" watchObservedRunningTime="2026-02-19 10:21:27.931087133 +0000 UTC m=+776.457551829" Feb 19 10:21:29 crc kubenswrapper[4811]: I0219 10:21:29.892424 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7" event={"ID":"d74c5c61-8e57-4702-8414-0e94a5080ce5","Type":"ContainerStarted","Data":"478125ac16dd33fd1f933251263a10da62ab6a1e6c91199d096fdf61a0f50e32"} Feb 19 10:21:29 crc kubenswrapper[4811]: I0219 10:21:29.912892 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xgsh7" podStartSLOduration=2.121595422 podStartE2EDuration="6.912865409s" podCreationTimestamp="2026-02-19 10:21:23 +0000 UTC" firstStartedPulling="2026-02-19 10:21:24.365876866 +0000 UTC m=+772.892341552" lastFinishedPulling="2026-02-19 10:21:29.157146853 +0000 UTC m=+777.683611539" observedRunningTime="2026-02-19 10:21:29.911634087 +0000 UTC m=+778.438098793" watchObservedRunningTime="2026-02-19 10:21:29.912865409 +0000 UTC m=+778.439330085" Feb 19 10:21:34 crc kubenswrapper[4811]: I0219 10:21:34.161746 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-lcv4q" Feb 19 10:21:34 crc kubenswrapper[4811]: I0219 10:21:34.478722 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:34 crc kubenswrapper[4811]: I0219 10:21:34.479116 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:34 crc kubenswrapper[4811]: I0219 10:21:34.485021 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:34 crc kubenswrapper[4811]: I0219 10:21:34.931150 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7458d57f55-mngr4" Feb 19 10:21:34 crc kubenswrapper[4811]: I0219 10:21:34.994092 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dh45w"] Feb 19 10:21:44 crc kubenswrapper[4811]: I0219 10:21:44.664743 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pdhvl" Feb 19 10:21:56 crc kubenswrapper[4811]: I0219 10:21:56.787939 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:21:56 crc kubenswrapper[4811]: I0219 10:21:56.788648 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:21:56 crc kubenswrapper[4811]: I0219 10:21:56.937621 4811 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv"] Feb 19 10:21:56 crc kubenswrapper[4811]: E0219 10:21:56.938009 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c796d39b-1173-4c3f-9350-672fbbfcab48" containerName="extract-content" Feb 19 10:21:56 crc kubenswrapper[4811]: I0219 10:21:56.938069 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c796d39b-1173-4c3f-9350-672fbbfcab48" containerName="extract-content" Feb 19 10:21:56 crc kubenswrapper[4811]: E0219 10:21:56.938094 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c796d39b-1173-4c3f-9350-672fbbfcab48" containerName="extract-utilities" Feb 19 10:21:56 crc kubenswrapper[4811]: I0219 10:21:56.938114 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c796d39b-1173-4c3f-9350-672fbbfcab48" containerName="extract-utilities" Feb 19 10:21:56 crc kubenswrapper[4811]: E0219 10:21:56.938154 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c796d39b-1173-4c3f-9350-672fbbfcab48" containerName="registry-server" Feb 19 10:21:56 crc kubenswrapper[4811]: I0219 10:21:56.938171 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c796d39b-1173-4c3f-9350-672fbbfcab48" containerName="registry-server" Feb 19 10:21:56 crc kubenswrapper[4811]: I0219 10:21:56.938372 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c796d39b-1173-4c3f-9350-672fbbfcab48" containerName="registry-server" Feb 19 10:21:56 crc kubenswrapper[4811]: I0219 10:21:56.939952 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:21:56 crc kubenswrapper[4811]: I0219 10:21:56.943552 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 10:21:56 crc kubenswrapper[4811]: I0219 10:21:56.956625 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv"] Feb 19 10:21:57 crc kubenswrapper[4811]: I0219 10:21:57.073231 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:21:57 crc kubenswrapper[4811]: I0219 10:21:57.073588 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cj29\" (UniqueName: \"kubernetes.io/projected/2dfded96-366b-42d7-b04a-bfa8e4a203a8-kube-api-access-6cj29\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:21:57 crc kubenswrapper[4811]: I0219 10:21:57.073845 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:21:57 crc kubenswrapper[4811]: 
I0219 10:21:57.175645 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cj29\" (UniqueName: \"kubernetes.io/projected/2dfded96-366b-42d7-b04a-bfa8e4a203a8-kube-api-access-6cj29\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:21:57 crc kubenswrapper[4811]: I0219 10:21:57.175716 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:21:57 crc kubenswrapper[4811]: I0219 10:21:57.175743 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:21:57 crc kubenswrapper[4811]: I0219 10:21:57.176203 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:21:57 crc kubenswrapper[4811]: I0219 10:21:57.176416 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:21:57 crc kubenswrapper[4811]: I0219 10:21:57.202273 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cj29\" (UniqueName: \"kubernetes.io/projected/2dfded96-366b-42d7-b04a-bfa8e4a203a8-kube-api-access-6cj29\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:21:57 crc kubenswrapper[4811]: I0219 10:21:57.296683 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:21:57 crc kubenswrapper[4811]: I0219 10:21:57.761672 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv"] Feb 19 10:21:58 crc kubenswrapper[4811]: I0219 10:21:58.070826 4811 generic.go:334] "Generic (PLEG): container finished" podID="2dfded96-366b-42d7-b04a-bfa8e4a203a8" containerID="391bb4a7814bf97267139fd11628a1a12b5ee0c75503eec7c3a9708374f05231" exitCode=0 Feb 19 10:21:58 crc kubenswrapper[4811]: I0219 10:21:58.070881 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" event={"ID":"2dfded96-366b-42d7-b04a-bfa8e4a203a8","Type":"ContainerDied","Data":"391bb4a7814bf97267139fd11628a1a12b5ee0c75503eec7c3a9708374f05231"} Feb 19 10:21:58 crc kubenswrapper[4811]: I0219 10:21:58.071360 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" event={"ID":"2dfded96-366b-42d7-b04a-bfa8e4a203a8","Type":"ContainerStarted","Data":"d84da53ac1027d4e9e5b91b8f4fda27791ee4561b940e93a191b47818de4b750"} Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.038344 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-dh45w" podUID="de01cd94-d3c6-4fa5-9033-9a65a318ab82" containerName="console" containerID="cri-o://1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b" gracePeriod=15 Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.088229 4811 generic.go:334] "Generic (PLEG): container finished" podID="2dfded96-366b-42d7-b04a-bfa8e4a203a8" containerID="cb7453461daaeafdfbf830b984674fb0ef992eee109222bde779ac797a24d3a3" exitCode=0 Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.088285 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" event={"ID":"2dfded96-366b-42d7-b04a-bfa8e4a203a8","Type":"ContainerDied","Data":"cb7453461daaeafdfbf830b984674fb0ef992eee109222bde779ac797a24d3a3"} Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.482620 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dh45w_de01cd94-d3c6-4fa5-9033-9a65a318ab82/console/0.log" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.482703 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.631571 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-config\") pod \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.632104 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kttl5\" (UniqueName: \"kubernetes.io/projected/de01cd94-d3c6-4fa5-9033-9a65a318ab82-kube-api-access-kttl5\") pod \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.632108 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-config" (OuterVolumeSpecName: "console-config") pod "de01cd94-d3c6-4fa5-9033-9a65a318ab82" (UID: "de01cd94-d3c6-4fa5-9033-9a65a318ab82"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.632163 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-oauth-serving-cert\") pod \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.632242 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-serving-cert\") pod \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.632320 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-service-ca\") pod \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.632365 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-oauth-config\") pod \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.632429 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-trusted-ca-bundle\") pod \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\" (UID: \"de01cd94-d3c6-4fa5-9033-9a65a318ab82\") " Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.632745 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "de01cd94-d3c6-4fa5-9033-9a65a318ab82" (UID: "de01cd94-d3c6-4fa5-9033-9a65a318ab82"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.632799 4811 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.633245 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-service-ca" (OuterVolumeSpecName: "service-ca") pod "de01cd94-d3c6-4fa5-9033-9a65a318ab82" (UID: "de01cd94-d3c6-4fa5-9033-9a65a318ab82"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.633287 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "de01cd94-d3c6-4fa5-9033-9a65a318ab82" (UID: "de01cd94-d3c6-4fa5-9033-9a65a318ab82"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.639289 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "de01cd94-d3c6-4fa5-9033-9a65a318ab82" (UID: "de01cd94-d3c6-4fa5-9033-9a65a318ab82"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.639348 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de01cd94-d3c6-4fa5-9033-9a65a318ab82-kube-api-access-kttl5" (OuterVolumeSpecName: "kube-api-access-kttl5") pod "de01cd94-d3c6-4fa5-9033-9a65a318ab82" (UID: "de01cd94-d3c6-4fa5-9033-9a65a318ab82"). InnerVolumeSpecName "kube-api-access-kttl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.639798 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "de01cd94-d3c6-4fa5-9033-9a65a318ab82" (UID: "de01cd94-d3c6-4fa5-9033-9a65a318ab82"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.733738 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kttl5\" (UniqueName: \"kubernetes.io/projected/de01cd94-d3c6-4fa5-9033-9a65a318ab82-kube-api-access-kttl5\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.733783 4811 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.733792 4811 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.733800 4811 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.733809 4811 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/de01cd94-d3c6-4fa5-9033-9a65a318ab82-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:00 crc kubenswrapper[4811]: I0219 10:22:00.733817 4811 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de01cd94-d3c6-4fa5-9033-9a65a318ab82-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.096936 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dh45w_de01cd94-d3c6-4fa5-9033-9a65a318ab82/console/0.log" Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.097062 4811 generic.go:334] "Generic (PLEG): container finished" podID="de01cd94-d3c6-4fa5-9033-9a65a318ab82" containerID="1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b" exitCode=2 Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.097159 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dh45w" Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.097215 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dh45w" event={"ID":"de01cd94-d3c6-4fa5-9033-9a65a318ab82","Type":"ContainerDied","Data":"1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b"} Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.097290 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dh45w" event={"ID":"de01cd94-d3c6-4fa5-9033-9a65a318ab82","Type":"ContainerDied","Data":"f3b290e0ef18510fc5fea8cefdbed8b9af613450e682d03c358f3ed17fd9925b"} Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.097322 4811 scope.go:117] "RemoveContainer" containerID="1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b" Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.100773 4811 generic.go:334] "Generic (PLEG): container finished" podID="2dfded96-366b-42d7-b04a-bfa8e4a203a8" containerID="0f21d59eecf54246967f5abd8f79294ccaf48cb2cacc70ddf9f81e478943750c" exitCode=0 Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.100847 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" event={"ID":"2dfded96-366b-42d7-b04a-bfa8e4a203a8","Type":"ContainerDied","Data":"0f21d59eecf54246967f5abd8f79294ccaf48cb2cacc70ddf9f81e478943750c"} Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.121338 4811 scope.go:117] "RemoveContainer" containerID="1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b" Feb 19 10:22:01 crc kubenswrapper[4811]: E0219 10:22:01.122506 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b\": container with ID starting with 
1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b not found: ID does not exist" containerID="1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b" Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.122747 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b"} err="failed to get container status \"1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b\": rpc error: code = NotFound desc = could not find container \"1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b\": container with ID starting with 1f63ccaffb514886348a130ee9ca596458c79e06c239df9932d9661bcfb6db6b not found: ID does not exist" Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.148766 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dh45w"] Feb 19 10:22:01 crc kubenswrapper[4811]: I0219 10:22:01.156765 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-dh45w"] Feb 19 10:22:02 crc kubenswrapper[4811]: I0219 10:22:02.452275 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:22:02 crc kubenswrapper[4811]: I0219 10:22:02.488988 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-util\") pod \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " Feb 19 10:22:02 crc kubenswrapper[4811]: I0219 10:22:02.489137 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cj29\" (UniqueName: \"kubernetes.io/projected/2dfded96-366b-42d7-b04a-bfa8e4a203a8-kube-api-access-6cj29\") pod \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " Feb 19 10:22:02 crc kubenswrapper[4811]: I0219 10:22:02.489217 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-bundle\") pod \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\" (UID: \"2dfded96-366b-42d7-b04a-bfa8e4a203a8\") " Feb 19 10:22:02 crc kubenswrapper[4811]: I0219 10:22:02.491076 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-bundle" (OuterVolumeSpecName: "bundle") pod "2dfded96-366b-42d7-b04a-bfa8e4a203a8" (UID: "2dfded96-366b-42d7-b04a-bfa8e4a203a8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:22:02 crc kubenswrapper[4811]: I0219 10:22:02.496372 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dfded96-366b-42d7-b04a-bfa8e4a203a8-kube-api-access-6cj29" (OuterVolumeSpecName: "kube-api-access-6cj29") pod "2dfded96-366b-42d7-b04a-bfa8e4a203a8" (UID: "2dfded96-366b-42d7-b04a-bfa8e4a203a8"). InnerVolumeSpecName "kube-api-access-6cj29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:22:02 crc kubenswrapper[4811]: I0219 10:22:02.519574 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-util" (OuterVolumeSpecName: "util") pod "2dfded96-366b-42d7-b04a-bfa8e4a203a8" (UID: "2dfded96-366b-42d7-b04a-bfa8e4a203a8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:22:02 crc kubenswrapper[4811]: I0219 10:22:02.591534 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cj29\" (UniqueName: \"kubernetes.io/projected/2dfded96-366b-42d7-b04a-bfa8e4a203a8-kube-api-access-6cj29\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:02 crc kubenswrapper[4811]: I0219 10:22:02.591601 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:02 crc kubenswrapper[4811]: I0219 10:22:02.591635 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2dfded96-366b-42d7-b04a-bfa8e4a203a8-util\") on node \"crc\" DevicePath \"\"" Feb 19 10:22:02 crc kubenswrapper[4811]: I0219 10:22:02.605630 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de01cd94-d3c6-4fa5-9033-9a65a318ab82" path="/var/lib/kubelet/pods/de01cd94-d3c6-4fa5-9033-9a65a318ab82/volumes" Feb 19 10:22:03 crc kubenswrapper[4811]: I0219 10:22:03.120582 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" event={"ID":"2dfded96-366b-42d7-b04a-bfa8e4a203a8","Type":"ContainerDied","Data":"d84da53ac1027d4e9e5b91b8f4fda27791ee4561b940e93a191b47818de4b750"} Feb 19 10:22:03 crc kubenswrapper[4811]: I0219 10:22:03.120648 4811 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d84da53ac1027d4e9e5b91b8f4fda27791ee4561b940e93a191b47818de4b750" Feb 19 10:22:03 crc kubenswrapper[4811]: I0219 10:22:03.120740 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.848672 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv"] Feb 19 10:22:11 crc kubenswrapper[4811]: E0219 10:22:11.849468 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de01cd94-d3c6-4fa5-9033-9a65a318ab82" containerName="console" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.849485 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="de01cd94-d3c6-4fa5-9033-9a65a318ab82" containerName="console" Feb 19 10:22:11 crc kubenswrapper[4811]: E0219 10:22:11.849505 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfded96-366b-42d7-b04a-bfa8e4a203a8" containerName="extract" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.849512 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfded96-366b-42d7-b04a-bfa8e4a203a8" containerName="extract" Feb 19 10:22:11 crc kubenswrapper[4811]: E0219 10:22:11.849529 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfded96-366b-42d7-b04a-bfa8e4a203a8" containerName="util" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.849536 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfded96-366b-42d7-b04a-bfa8e4a203a8" containerName="util" Feb 19 10:22:11 crc kubenswrapper[4811]: E0219 10:22:11.849547 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfded96-366b-42d7-b04a-bfa8e4a203a8" containerName="pull" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.849555 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfded96-366b-42d7-b04a-bfa8e4a203a8" 
containerName="pull" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.849675 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="de01cd94-d3c6-4fa5-9033-9a65a318ab82" containerName="console" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.849692 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfded96-366b-42d7-b04a-bfa8e4a203a8" containerName="extract" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.850100 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.852138 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.852412 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.853035 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.853427 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ht4b6" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.854067 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 10:22:11 crc kubenswrapper[4811]: I0219 10:22:11.875181 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv"] Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.009011 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbbtq\" (UniqueName: 
\"kubernetes.io/projected/b4c117aa-5a81-4f2a-88d0-de69a1da3c84-kube-api-access-pbbtq\") pod \"metallb-operator-controller-manager-6b977f8b79-xtsfv\" (UID: \"b4c117aa-5a81-4f2a-88d0-de69a1da3c84\") " pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.009570 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b4c117aa-5a81-4f2a-88d0-de69a1da3c84-apiservice-cert\") pod \"metallb-operator-controller-manager-6b977f8b79-xtsfv\" (UID: \"b4c117aa-5a81-4f2a-88d0-de69a1da3c84\") " pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.009658 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b4c117aa-5a81-4f2a-88d0-de69a1da3c84-webhook-cert\") pod \"metallb-operator-controller-manager-6b977f8b79-xtsfv\" (UID: \"b4c117aa-5a81-4f2a-88d0-de69a1da3c84\") " pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.111487 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbbtq\" (UniqueName: \"kubernetes.io/projected/b4c117aa-5a81-4f2a-88d0-de69a1da3c84-kube-api-access-pbbtq\") pod \"metallb-operator-controller-manager-6b977f8b79-xtsfv\" (UID: \"b4c117aa-5a81-4f2a-88d0-de69a1da3c84\") " pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.111536 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b4c117aa-5a81-4f2a-88d0-de69a1da3c84-apiservice-cert\") pod \"metallb-operator-controller-manager-6b977f8b79-xtsfv\" (UID: 
\"b4c117aa-5a81-4f2a-88d0-de69a1da3c84\") " pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.111564 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b4c117aa-5a81-4f2a-88d0-de69a1da3c84-webhook-cert\") pod \"metallb-operator-controller-manager-6b977f8b79-xtsfv\" (UID: \"b4c117aa-5a81-4f2a-88d0-de69a1da3c84\") " pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.117071 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b4c117aa-5a81-4f2a-88d0-de69a1da3c84-webhook-cert\") pod \"metallb-operator-controller-manager-6b977f8b79-xtsfv\" (UID: \"b4c117aa-5a81-4f2a-88d0-de69a1da3c84\") " pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.117691 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b4c117aa-5a81-4f2a-88d0-de69a1da3c84-apiservice-cert\") pod \"metallb-operator-controller-manager-6b977f8b79-xtsfv\" (UID: \"b4c117aa-5a81-4f2a-88d0-de69a1da3c84\") " pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.126907 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbbtq\" (UniqueName: \"kubernetes.io/projected/b4c117aa-5a81-4f2a-88d0-de69a1da3c84-kube-api-access-pbbtq\") pod \"metallb-operator-controller-manager-6b977f8b79-xtsfv\" (UID: \"b4c117aa-5a81-4f2a-88d0-de69a1da3c84\") " pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.166643 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.207828 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7"] Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.208763 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.212598 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8tgxh" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.212844 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.212994 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.222778 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7"] Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.313296 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sdlc\" (UniqueName: \"kubernetes.io/projected/3647a1c7-33fe-440f-b648-ace58f1deb0d-kube-api-access-8sdlc\") pod \"metallb-operator-webhook-server-c47dd88fc-mx6n7\" (UID: \"3647a1c7-33fe-440f-b648-ace58f1deb0d\") " pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.313355 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3647a1c7-33fe-440f-b648-ace58f1deb0d-apiservice-cert\") pod 
\"metallb-operator-webhook-server-c47dd88fc-mx6n7\" (UID: \"3647a1c7-33fe-440f-b648-ace58f1deb0d\") " pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.313389 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3647a1c7-33fe-440f-b648-ace58f1deb0d-webhook-cert\") pod \"metallb-operator-webhook-server-c47dd88fc-mx6n7\" (UID: \"3647a1c7-33fe-440f-b648-ace58f1deb0d\") " pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.414801 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3647a1c7-33fe-440f-b648-ace58f1deb0d-apiservice-cert\") pod \"metallb-operator-webhook-server-c47dd88fc-mx6n7\" (UID: \"3647a1c7-33fe-440f-b648-ace58f1deb0d\") " pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.414859 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3647a1c7-33fe-440f-b648-ace58f1deb0d-webhook-cert\") pod \"metallb-operator-webhook-server-c47dd88fc-mx6n7\" (UID: \"3647a1c7-33fe-440f-b648-ace58f1deb0d\") " pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.414943 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sdlc\" (UniqueName: \"kubernetes.io/projected/3647a1c7-33fe-440f-b648-ace58f1deb0d-kube-api-access-8sdlc\") pod \"metallb-operator-webhook-server-c47dd88fc-mx6n7\" (UID: \"3647a1c7-33fe-440f-b648-ace58f1deb0d\") " pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.421968 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3647a1c7-33fe-440f-b648-ace58f1deb0d-webhook-cert\") pod \"metallb-operator-webhook-server-c47dd88fc-mx6n7\" (UID: \"3647a1c7-33fe-440f-b648-ace58f1deb0d\") " pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.422127 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3647a1c7-33fe-440f-b648-ace58f1deb0d-apiservice-cert\") pod \"metallb-operator-webhook-server-c47dd88fc-mx6n7\" (UID: \"3647a1c7-33fe-440f-b648-ace58f1deb0d\") " pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.434881 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sdlc\" (UniqueName: \"kubernetes.io/projected/3647a1c7-33fe-440f-b648-ace58f1deb0d-kube-api-access-8sdlc\") pod \"metallb-operator-webhook-server-c47dd88fc-mx6n7\" (UID: \"3647a1c7-33fe-440f-b648-ace58f1deb0d\") " pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.459121 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv"] Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.568767 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:12 crc kubenswrapper[4811]: I0219 10:22:12.845912 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7"] Feb 19 10:22:12 crc kubenswrapper[4811]: W0219 10:22:12.849635 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3647a1c7_33fe_440f_b648_ace58f1deb0d.slice/crio-c26a4a2a651e87251e706eb13435cf4086b612be35a74e8bde352857caae6175 WatchSource:0}: Error finding container c26a4a2a651e87251e706eb13435cf4086b612be35a74e8bde352857caae6175: Status 404 returned error can't find the container with id c26a4a2a651e87251e706eb13435cf4086b612be35a74e8bde352857caae6175 Feb 19 10:22:13 crc kubenswrapper[4811]: I0219 10:22:13.176531 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" event={"ID":"b4c117aa-5a81-4f2a-88d0-de69a1da3c84","Type":"ContainerStarted","Data":"c0440bbe5773a5bd1918719a0663abbba8983a9933e570e108105755725bd695"} Feb 19 10:22:13 crc kubenswrapper[4811]: I0219 10:22:13.177601 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" event={"ID":"3647a1c7-33fe-440f-b648-ace58f1deb0d","Type":"ContainerStarted","Data":"c26a4a2a651e87251e706eb13435cf4086b612be35a74e8bde352857caae6175"} Feb 19 10:22:16 crc kubenswrapper[4811]: I0219 10:22:16.203233 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" event={"ID":"b4c117aa-5a81-4f2a-88d0-de69a1da3c84","Type":"ContainerStarted","Data":"95125e88f815522915f977856d2ecd67497bb346415555a8129aa7d78a87d128"} Feb 19 10:22:16 crc kubenswrapper[4811]: I0219 10:22:16.203806 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:16 crc kubenswrapper[4811]: I0219 10:22:16.235802 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" podStartSLOduration=2.208421907 podStartE2EDuration="5.23574411s" podCreationTimestamp="2026-02-19 10:22:11 +0000 UTC" firstStartedPulling="2026-02-19 10:22:12.466961081 +0000 UTC m=+820.993425767" lastFinishedPulling="2026-02-19 10:22:15.494283284 +0000 UTC m=+824.020747970" observedRunningTime="2026-02-19 10:22:16.228906253 +0000 UTC m=+824.755370939" watchObservedRunningTime="2026-02-19 10:22:16.23574411 +0000 UTC m=+824.762208806" Feb 19 10:22:17 crc kubenswrapper[4811]: I0219 10:22:17.222304 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" event={"ID":"3647a1c7-33fe-440f-b648-ace58f1deb0d","Type":"ContainerStarted","Data":"7fdcf91b856ed53d979e3506add901613b92ddc180c30d04bf9c18a906ad0da8"} Feb 19 10:22:17 crc kubenswrapper[4811]: I0219 10:22:17.249491 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" podStartSLOduration=1.128223127 podStartE2EDuration="5.24947148s" podCreationTimestamp="2026-02-19 10:22:12 +0000 UTC" firstStartedPulling="2026-02-19 10:22:12.855705837 +0000 UTC m=+821.382170523" lastFinishedPulling="2026-02-19 10:22:16.97695419 +0000 UTC m=+825.503418876" observedRunningTime="2026-02-19 10:22:17.246136104 +0000 UTC m=+825.772600790" watchObservedRunningTime="2026-02-19 10:22:17.24947148 +0000 UTC m=+825.775936166" Feb 19 10:22:18 crc kubenswrapper[4811]: I0219 10:22:18.228812 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:26 crc kubenswrapper[4811]: I0219 10:22:26.788566 4811 patch_prober.go:28] 
interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:22:26 crc kubenswrapper[4811]: I0219 10:22:26.789598 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:22:26 crc kubenswrapper[4811]: I0219 10:22:26.789663 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:22:26 crc kubenswrapper[4811]: I0219 10:22:26.790624 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"193ac12085ccacd5c18931cdeb481fcee9513ec65e0023789373cfc6e16223d8"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:22:26 crc kubenswrapper[4811]: I0219 10:22:26.790698 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://193ac12085ccacd5c18931cdeb481fcee9513ec65e0023789373cfc6e16223d8" gracePeriod=600 Feb 19 10:22:27 crc kubenswrapper[4811]: I0219 10:22:27.292676 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="193ac12085ccacd5c18931cdeb481fcee9513ec65e0023789373cfc6e16223d8" exitCode=0 Feb 19 10:22:27 crc kubenswrapper[4811]: I0219 
10:22:27.292731 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"193ac12085ccacd5c18931cdeb481fcee9513ec65e0023789373cfc6e16223d8"} Feb 19 10:22:27 crc kubenswrapper[4811]: I0219 10:22:27.292760 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"861076d931739c868265cf3b64ebb02f51f96fc8b2fc1a6337148664ae662a28"} Feb 19 10:22:27 crc kubenswrapper[4811]: I0219 10:22:27.292778 4811 scope.go:117] "RemoveContainer" containerID="726804576e37e57d9029714747ad105ed4b9c390bdf00d5b9989e36faf5e4955" Feb 19 10:22:32 crc kubenswrapper[4811]: I0219 10:22:32.583060 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-c47dd88fc-mx6n7" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.169787 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b977f8b79-xtsfv" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.835512 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-ngn4k"] Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.839524 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.844788 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.845321 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj"] Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.844825 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-4rgp5" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.845722 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.846327 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.848001 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.857129 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj"] Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.884249 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz5v6\" (UniqueName: \"kubernetes.io/projected/5742d3da-f213-4a03-a18d-55cbe88d82b5-kube-api-access-wz5v6\") pod \"frr-k8s-webhook-server-78b44bf5bb-fx4zj\" (UID: \"5742d3da-f213-4a03-a18d-55cbe88d82b5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.884303 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/5742d3da-f213-4a03-a18d-55cbe88d82b5-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fx4zj\" (UID: \"5742d3da-f213-4a03-a18d-55cbe88d82b5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.884325 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/84b9332e-fb14-4b7e-844e-d902809b4e4c-frr-startup\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.884366 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-reloader\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.884383 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-metrics\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.884400 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kclv\" (UniqueName: \"kubernetes.io/projected/84b9332e-fb14-4b7e-844e-d902809b4e4c-kube-api-access-4kclv\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.884419 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/84b9332e-fb14-4b7e-844e-d902809b4e4c-metrics-certs\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.884460 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-frr-sockets\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.884483 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-frr-conf\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.972790 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jswp7"] Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.973795 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-jswp7" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.981168 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.981981 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tccfs" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.982174 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.982408 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.986638 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-7sw8d"] Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987518 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kclv\" (UniqueName: \"kubernetes.io/projected/84b9332e-fb14-4b7e-844e-d902809b4e4c-kube-api-access-4kclv\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987557 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84b9332e-fb14-4b7e-844e-d902809b4e4c-metrics-certs\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987582 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-metrics-certs\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " 
pod="metallb-system/speaker-jswp7" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987611 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-frr-sockets\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987620 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987632 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-frr-conf\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987666 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv9w5\" (UniqueName: \"kubernetes.io/projected/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-kube-api-access-nv9w5\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987709 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-metallb-excludel2\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987733 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz5v6\" (UniqueName: \"kubernetes.io/projected/5742d3da-f213-4a03-a18d-55cbe88d82b5-kube-api-access-wz5v6\") pod 
\"frr-k8s-webhook-server-78b44bf5bb-fx4zj\" (UID: \"5742d3da-f213-4a03-a18d-55cbe88d82b5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987750 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5742d3da-f213-4a03-a18d-55cbe88d82b5-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fx4zj\" (UID: \"5742d3da-f213-4a03-a18d-55cbe88d82b5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987767 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/84b9332e-fb14-4b7e-844e-d902809b4e4c-frr-startup\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987786 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-memberlist\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987804 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-reloader\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.987823 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-metrics\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc 
kubenswrapper[4811]: I0219 10:22:52.988271 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-metrics\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.989400 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-frr-sockets\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.989631 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-frr-conf\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.990344 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/84b9332e-fb14-4b7e-844e-d902809b4e4c-frr-startup\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:52 crc kubenswrapper[4811]: I0219 10:22:52.990604 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/84b9332e-fb14-4b7e-844e-d902809b4e4c-reloader\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.003604 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5742d3da-f213-4a03-a18d-55cbe88d82b5-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fx4zj\" (UID: 
\"5742d3da-f213-4a03-a18d-55cbe88d82b5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.003812 4811 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.018353 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kclv\" (UniqueName: \"kubernetes.io/projected/84b9332e-fb14-4b7e-844e-d902809b4e4c-kube-api-access-4kclv\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.018752 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84b9332e-fb14-4b7e-844e-d902809b4e4c-metrics-certs\") pod \"frr-k8s-ngn4k\" (UID: \"84b9332e-fb14-4b7e-844e-d902809b4e4c\") " pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.020314 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz5v6\" (UniqueName: \"kubernetes.io/projected/5742d3da-f213-4a03-a18d-55cbe88d82b5-kube-api-access-wz5v6\") pod \"frr-k8s-webhook-server-78b44bf5bb-fx4zj\" (UID: \"5742d3da-f213-4a03-a18d-55cbe88d82b5\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.032591 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-7sw8d"] Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.088227 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-memberlist\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 
10:22:53.088279 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rgmc\" (UniqueName: \"kubernetes.io/projected/9eec6b61-1b8d-4615-8333-d93630c46ce2-kube-api-access-4rgmc\") pod \"controller-69bbfbf88f-7sw8d\" (UID: \"9eec6b61-1b8d-4615-8333-d93630c46ce2\") " pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.088313 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9eec6b61-1b8d-4615-8333-d93630c46ce2-cert\") pod \"controller-69bbfbf88f-7sw8d\" (UID: \"9eec6b61-1b8d-4615-8333-d93630c46ce2\") " pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.088337 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-metrics-certs\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.088393 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv9w5\" (UniqueName: \"kubernetes.io/projected/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-kube-api-access-nv9w5\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.088429 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9eec6b61-1b8d-4615-8333-d93630c46ce2-metrics-certs\") pod \"controller-69bbfbf88f-7sw8d\" (UID: \"9eec6b61-1b8d-4615-8333-d93630c46ce2\") " pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.088485 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-metallb-excludel2\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.089266 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-metallb-excludel2\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:53 crc kubenswrapper[4811]: E0219 10:22:53.089353 4811 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 10:22:53 crc kubenswrapper[4811]: E0219 10:22:53.089397 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-memberlist podName:4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461 nodeName:}" failed. No retries permitted until 2026-02-19 10:22:53.589381218 +0000 UTC m=+862.115845904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-memberlist") pod "speaker-jswp7" (UID: "4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461") : secret "metallb-memberlist" not found Feb 19 10:22:53 crc kubenswrapper[4811]: E0219 10:22:53.089969 4811 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 19 10:22:53 crc kubenswrapper[4811]: E0219 10:22:53.090209 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-metrics-certs podName:4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461 nodeName:}" failed. 
No retries permitted until 2026-02-19 10:22:53.590197559 +0000 UTC m=+862.116662245 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-metrics-certs") pod "speaker-jswp7" (UID: "4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461") : secret "speaker-certs-secret" not found Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.115234 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv9w5\" (UniqueName: \"kubernetes.io/projected/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-kube-api-access-nv9w5\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.189149 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rgmc\" (UniqueName: \"kubernetes.io/projected/9eec6b61-1b8d-4615-8333-d93630c46ce2-kube-api-access-4rgmc\") pod \"controller-69bbfbf88f-7sw8d\" (UID: \"9eec6b61-1b8d-4615-8333-d93630c46ce2\") " pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.189196 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9eec6b61-1b8d-4615-8333-d93630c46ce2-cert\") pod \"controller-69bbfbf88f-7sw8d\" (UID: \"9eec6b61-1b8d-4615-8333-d93630c46ce2\") " pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.189262 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9eec6b61-1b8d-4615-8333-d93630c46ce2-metrics-certs\") pod \"controller-69bbfbf88f-7sw8d\" (UID: \"9eec6b61-1b8d-4615-8333-d93630c46ce2\") " pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.192651 4811 reflector.go:368] Caches 
populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.193141 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9eec6b61-1b8d-4615-8333-d93630c46ce2-metrics-certs\") pod \"controller-69bbfbf88f-7sw8d\" (UID: \"9eec6b61-1b8d-4615-8333-d93630c46ce2\") " pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.206610 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9eec6b61-1b8d-4615-8333-d93630c46ce2-cert\") pod \"controller-69bbfbf88f-7sw8d\" (UID: \"9eec6b61-1b8d-4615-8333-d93630c46ce2\") " pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.211838 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rgmc\" (UniqueName: \"kubernetes.io/projected/9eec6b61-1b8d-4615-8333-d93630c46ce2-kube-api-access-4rgmc\") pod \"controller-69bbfbf88f-7sw8d\" (UID: \"9eec6b61-1b8d-4615-8333-d93630c46ce2\") " pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.218393 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.229128 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.390285 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.545805 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ngn4k" event={"ID":"84b9332e-fb14-4b7e-844e-d902809b4e4c","Type":"ContainerStarted","Data":"53f21219adcc666b9c4b081daf0a6c50592331b9d92dffdf74a1750bcdf80de4"} Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.619472 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-memberlist\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.619554 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-metrics-certs\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.620115 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj"] Feb 19 10:22:53 crc kubenswrapper[4811]: E0219 10:22:53.620637 4811 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 10:22:53 crc kubenswrapper[4811]: E0219 10:22:53.620750 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-memberlist podName:4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461 nodeName:}" failed. No retries permitted until 2026-02-19 10:22:54.620727603 +0000 UTC m=+863.147192329 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-memberlist") pod "speaker-jswp7" (UID: "4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461") : secret "metallb-memberlist" not found Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.627722 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-metrics-certs\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:53 crc kubenswrapper[4811]: I0219 10:22:53.779016 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-7sw8d"] Feb 19 10:22:53 crc kubenswrapper[4811]: W0219 10:22:53.790339 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eec6b61_1b8d_4615_8333_d93630c46ce2.slice/crio-0f0781272a9e6aeb8b7dda5c8d40d02c9fa6a10826844a35f1acaf6d7f5e4271 WatchSource:0}: Error finding container 0f0781272a9e6aeb8b7dda5c8d40d02c9fa6a10826844a35f1acaf6d7f5e4271: Status 404 returned error can't find the container with id 0f0781272a9e6aeb8b7dda5c8d40d02c9fa6a10826844a35f1acaf6d7f5e4271 Feb 19 10:22:54 crc kubenswrapper[4811]: I0219 10:22:54.557062 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7sw8d" event={"ID":"9eec6b61-1b8d-4615-8333-d93630c46ce2","Type":"ContainerStarted","Data":"083fd0fa1294c2130eeaea96bec911523ae25c515ca5fe5ffd0a227aaa526672"} Feb 19 10:22:54 crc kubenswrapper[4811]: I0219 10:22:54.558070 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7sw8d" event={"ID":"9eec6b61-1b8d-4615-8333-d93630c46ce2","Type":"ContainerStarted","Data":"e916db893086158e973588a2ff1d046f174db3dbe1d6ae4785972ff6964d7c7d"} Feb 19 10:22:54 crc kubenswrapper[4811]: I0219 
10:22:54.558094 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:22:54 crc kubenswrapper[4811]: I0219 10:22:54.558107 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7sw8d" event={"ID":"9eec6b61-1b8d-4615-8333-d93630c46ce2","Type":"ContainerStarted","Data":"0f0781272a9e6aeb8b7dda5c8d40d02c9fa6a10826844a35f1acaf6d7f5e4271"} Feb 19 10:22:54 crc kubenswrapper[4811]: I0219 10:22:54.559145 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" event={"ID":"5742d3da-f213-4a03-a18d-55cbe88d82b5","Type":"ContainerStarted","Data":"bb004f5c128047a8491c57359261fb491f8f61e81a15c6ec1899bea1a534f40e"} Feb 19 10:22:54 crc kubenswrapper[4811]: I0219 10:22:54.587795 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-7sw8d" podStartSLOduration=2.587767263 podStartE2EDuration="2.587767263s" podCreationTimestamp="2026-02-19 10:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:22:54.575829113 +0000 UTC m=+863.102293799" watchObservedRunningTime="2026-02-19 10:22:54.587767263 +0000 UTC m=+863.114231959" Feb 19 10:22:54 crc kubenswrapper[4811]: I0219 10:22:54.633500 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-memberlist\") pod \"speaker-jswp7\" (UID: \"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:54 crc kubenswrapper[4811]: I0219 10:22:54.658887 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461-memberlist\") pod \"speaker-jswp7\" (UID: 
\"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461\") " pod="metallb-system/speaker-jswp7" Feb 19 10:22:54 crc kubenswrapper[4811]: I0219 10:22:54.871772 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jswp7" Feb 19 10:22:54 crc kubenswrapper[4811]: W0219 10:22:54.901680 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bb4dc5d_a3e6_4fde_b2cb_1d1e5d0e6461.slice/crio-4227e56eaeef3fbf0235a7411e11b1a799be58394f93d1b98ad1b08acde93440 WatchSource:0}: Error finding container 4227e56eaeef3fbf0235a7411e11b1a799be58394f93d1b98ad1b08acde93440: Status 404 returned error can't find the container with id 4227e56eaeef3fbf0235a7411e11b1a799be58394f93d1b98ad1b08acde93440 Feb 19 10:22:55 crc kubenswrapper[4811]: I0219 10:22:55.568825 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jswp7" event={"ID":"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461","Type":"ContainerStarted","Data":"f2bcece6f8af2aa327bad6d97623154999f48f34476d4bfa33a96ed23673d352"} Feb 19 10:22:55 crc kubenswrapper[4811]: I0219 10:22:55.569221 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jswp7" event={"ID":"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461","Type":"ContainerStarted","Data":"9555d03cd89ffc81ca1c7207dcadada40273dc85f47de8e419dbf523e981dd09"} Feb 19 10:22:55 crc kubenswrapper[4811]: I0219 10:22:55.569236 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jswp7" event={"ID":"4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461","Type":"ContainerStarted","Data":"4227e56eaeef3fbf0235a7411e11b1a799be58394f93d1b98ad1b08acde93440"} Feb 19 10:22:55 crc kubenswrapper[4811]: I0219 10:22:55.569421 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jswp7" Feb 19 10:22:55 crc kubenswrapper[4811]: I0219 10:22:55.604437 4811 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="metallb-system/speaker-jswp7" podStartSLOduration=3.6044220879999997 podStartE2EDuration="3.604422088s" podCreationTimestamp="2026-02-19 10:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:22:55.602606111 +0000 UTC m=+864.129070797" watchObservedRunningTime="2026-02-19 10:22:55.604422088 +0000 UTC m=+864.130886774" Feb 19 10:23:02 crc kubenswrapper[4811]: I0219 10:23:02.630050 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" event={"ID":"5742d3da-f213-4a03-a18d-55cbe88d82b5","Type":"ContainerStarted","Data":"748b906157d803fc6cfd5f45ee38ff7a632267af0be0f6fe7d89f3b6709f249a"} Feb 19 10:23:02 crc kubenswrapper[4811]: I0219 10:23:02.631064 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" Feb 19 10:23:02 crc kubenswrapper[4811]: I0219 10:23:02.632972 4811 generic.go:334] "Generic (PLEG): container finished" podID="84b9332e-fb14-4b7e-844e-d902809b4e4c" containerID="30615b48c30ba0e5db7bfe562b27326e5850c4f31957d9249b2c756f3d4a1ef9" exitCode=0 Feb 19 10:23:02 crc kubenswrapper[4811]: I0219 10:23:02.633045 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ngn4k" event={"ID":"84b9332e-fb14-4b7e-844e-d902809b4e4c","Type":"ContainerDied","Data":"30615b48c30ba0e5db7bfe562b27326e5850c4f31957d9249b2c756f3d4a1ef9"} Feb 19 10:23:02 crc kubenswrapper[4811]: I0219 10:23:02.657870 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" podStartSLOduration=2.57228233 podStartE2EDuration="10.657851034s" podCreationTimestamp="2026-02-19 10:22:52 +0000 UTC" firstStartedPulling="2026-02-19 10:22:53.652354414 +0000 UTC m=+862.178819100" lastFinishedPulling="2026-02-19 10:23:01.737923088 +0000 UTC m=+870.264387804" 
observedRunningTime="2026-02-19 10:23:02.649721393 +0000 UTC m=+871.176186079" watchObservedRunningTime="2026-02-19 10:23:02.657851034 +0000 UTC m=+871.184315710" Feb 19 10:23:03 crc kubenswrapper[4811]: I0219 10:23:03.394775 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-7sw8d" Feb 19 10:23:03 crc kubenswrapper[4811]: I0219 10:23:03.642415 4811 generic.go:334] "Generic (PLEG): container finished" podID="84b9332e-fb14-4b7e-844e-d902809b4e4c" containerID="8c23c0cfd1c1df9067124b9bb66ccf9f98e2887d110e41af646b04b4efcd0efe" exitCode=0 Feb 19 10:23:03 crc kubenswrapper[4811]: I0219 10:23:03.642483 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ngn4k" event={"ID":"84b9332e-fb14-4b7e-844e-d902809b4e4c","Type":"ContainerDied","Data":"8c23c0cfd1c1df9067124b9bb66ccf9f98e2887d110e41af646b04b4efcd0efe"} Feb 19 10:23:04 crc kubenswrapper[4811]: I0219 10:23:04.654204 4811 generic.go:334] "Generic (PLEG): container finished" podID="84b9332e-fb14-4b7e-844e-d902809b4e4c" containerID="13b8aae564e99e0186f31a6e7fb8a8c3288f9dc682c4fda68e89cfbde6c55415" exitCode=0 Feb 19 10:23:04 crc kubenswrapper[4811]: I0219 10:23:04.654276 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ngn4k" event={"ID":"84b9332e-fb14-4b7e-844e-d902809b4e4c","Type":"ContainerDied","Data":"13b8aae564e99e0186f31a6e7fb8a8c3288f9dc682c4fda68e89cfbde6c55415"} Feb 19 10:23:05 crc kubenswrapper[4811]: I0219 10:23:05.676633 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ngn4k" event={"ID":"84b9332e-fb14-4b7e-844e-d902809b4e4c","Type":"ContainerStarted","Data":"4fc39fade383d172bd75d2f81e109ec75f7cee4b0a4c295eb0bdbb21403cda82"} Feb 19 10:23:05 crc kubenswrapper[4811]: I0219 10:23:05.677211 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ngn4k" 
event={"ID":"84b9332e-fb14-4b7e-844e-d902809b4e4c","Type":"ContainerStarted","Data":"e44206afe76a512e749dfe53e0552cf1b9f0837a94a620efd65d07984d1e6530"} Feb 19 10:23:05 crc kubenswrapper[4811]: I0219 10:23:05.677230 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ngn4k" event={"ID":"84b9332e-fb14-4b7e-844e-d902809b4e4c","Type":"ContainerStarted","Data":"449483258e82f1b625bcea705cd53c7eebb91cca9039ac18102efd9d825e0b05"} Feb 19 10:23:05 crc kubenswrapper[4811]: I0219 10:23:05.677243 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ngn4k" event={"ID":"84b9332e-fb14-4b7e-844e-d902809b4e4c","Type":"ContainerStarted","Data":"7e6548a0d8a5d768b50545fcad98e77aa29e0e57d018928d7cc2f31b5b38ac51"} Feb 19 10:23:05 crc kubenswrapper[4811]: I0219 10:23:05.677259 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ngn4k" event={"ID":"84b9332e-fb14-4b7e-844e-d902809b4e4c","Type":"ContainerStarted","Data":"97750cf3e9c7b25c342f730cede82c7b9ea0a0b05d2a75bdf92e076a07b4fd71"} Feb 19 10:23:06 crc kubenswrapper[4811]: I0219 10:23:06.688912 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ngn4k" event={"ID":"84b9332e-fb14-4b7e-844e-d902809b4e4c","Type":"ContainerStarted","Data":"a2e7a1daa88c7fb0228a14676a3fab952d52c1286e44db1d5d9b17c2f562c8f6"} Feb 19 10:23:06 crc kubenswrapper[4811]: I0219 10:23:06.689115 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:23:08 crc kubenswrapper[4811]: I0219 10:23:08.219466 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:23:08 crc kubenswrapper[4811]: I0219 10:23:08.291252 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:23:08 crc kubenswrapper[4811]: I0219 10:23:08.329559 4811 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="metallb-system/frr-k8s-ngn4k" podStartSLOduration=7.983113042 podStartE2EDuration="16.329529953s" podCreationTimestamp="2026-02-19 10:22:52 +0000 UTC" firstStartedPulling="2026-02-19 10:22:53.362921015 +0000 UTC m=+861.889385701" lastFinishedPulling="2026-02-19 10:23:01.709337926 +0000 UTC m=+870.235802612" observedRunningTime="2026-02-19 10:23:06.716709659 +0000 UTC m=+875.243174355" watchObservedRunningTime="2026-02-19 10:23:08.329529953 +0000 UTC m=+876.855994639" Feb 19 10:23:13 crc kubenswrapper[4811]: I0219 10:23:13.235705 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fx4zj" Feb 19 10:23:14 crc kubenswrapper[4811]: I0219 10:23:14.878438 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jswp7" Feb 19 10:23:17 crc kubenswrapper[4811]: I0219 10:23:17.477407 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ldg8d"] Feb 19 10:23:17 crc kubenswrapper[4811]: I0219 10:23:17.478409 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ldg8d" Feb 19 10:23:17 crc kubenswrapper[4811]: I0219 10:23:17.480873 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 19 10:23:17 crc kubenswrapper[4811]: I0219 10:23:17.481263 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wl9k2" Feb 19 10:23:17 crc kubenswrapper[4811]: I0219 10:23:17.482138 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 19 10:23:17 crc kubenswrapper[4811]: I0219 10:23:17.532847 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ldg8d"] Feb 19 10:23:17 crc kubenswrapper[4811]: I0219 10:23:17.591738 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trglm\" (UniqueName: \"kubernetes.io/projected/131c8975-2e08-497d-945f-4e465a3a2b87-kube-api-access-trglm\") pod \"openstack-operator-index-ldg8d\" (UID: \"131c8975-2e08-497d-945f-4e465a3a2b87\") " pod="openstack-operators/openstack-operator-index-ldg8d" Feb 19 10:23:17 crc kubenswrapper[4811]: I0219 10:23:17.693637 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trglm\" (UniqueName: \"kubernetes.io/projected/131c8975-2e08-497d-945f-4e465a3a2b87-kube-api-access-trglm\") pod \"openstack-operator-index-ldg8d\" (UID: \"131c8975-2e08-497d-945f-4e465a3a2b87\") " pod="openstack-operators/openstack-operator-index-ldg8d" Feb 19 10:23:17 crc kubenswrapper[4811]: I0219 10:23:17.716786 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trglm\" (UniqueName: \"kubernetes.io/projected/131c8975-2e08-497d-945f-4e465a3a2b87-kube-api-access-trglm\") pod \"openstack-operator-index-ldg8d\" (UID: 
\"131c8975-2e08-497d-945f-4e465a3a2b87\") " pod="openstack-operators/openstack-operator-index-ldg8d" Feb 19 10:23:17 crc kubenswrapper[4811]: I0219 10:23:17.836752 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ldg8d" Feb 19 10:23:18 crc kubenswrapper[4811]: I0219 10:23:18.289357 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ldg8d"] Feb 19 10:23:18 crc kubenswrapper[4811]: W0219 10:23:18.297883 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod131c8975_2e08_497d_945f_4e465a3a2b87.slice/crio-87b9e962d36c63c1ddbd8a12950be66a3e0db554c1ec16e2c9d6230ccb4ef6b8 WatchSource:0}: Error finding container 87b9e962d36c63c1ddbd8a12950be66a3e0db554c1ec16e2c9d6230ccb4ef6b8: Status 404 returned error can't find the container with id 87b9e962d36c63c1ddbd8a12950be66a3e0db554c1ec16e2c9d6230ccb4ef6b8 Feb 19 10:23:18 crc kubenswrapper[4811]: I0219 10:23:18.785424 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ldg8d" event={"ID":"131c8975-2e08-497d-945f-4e465a3a2b87","Type":"ContainerStarted","Data":"87b9e962d36c63c1ddbd8a12950be66a3e0db554c1ec16e2c9d6230ccb4ef6b8"} Feb 19 10:23:20 crc kubenswrapper[4811]: I0219 10:23:20.258217 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ldg8d"] Feb 19 10:23:20 crc kubenswrapper[4811]: I0219 10:23:20.800321 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ldg8d" event={"ID":"131c8975-2e08-497d-945f-4e465a3a2b87","Type":"ContainerStarted","Data":"e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf"} Feb 19 10:23:20 crc kubenswrapper[4811]: I0219 10:23:20.800534 4811 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack-operators/openstack-operator-index-ldg8d" podUID="131c8975-2e08-497d-945f-4e465a3a2b87" containerName="registry-server" containerID="cri-o://e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf" gracePeriod=2 Feb 19 10:23:20 crc kubenswrapper[4811]: I0219 10:23:20.828818 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ldg8d" podStartSLOduration=1.520890503 podStartE2EDuration="3.82879277s" podCreationTimestamp="2026-02-19 10:23:17 +0000 UTC" firstStartedPulling="2026-02-19 10:23:18.301397758 +0000 UTC m=+886.827862454" lastFinishedPulling="2026-02-19 10:23:20.609300045 +0000 UTC m=+889.135764721" observedRunningTime="2026-02-19 10:23:20.823550784 +0000 UTC m=+889.350015470" watchObservedRunningTime="2026-02-19 10:23:20.82879277 +0000 UTC m=+889.355257466" Feb 19 10:23:20 crc kubenswrapper[4811]: I0219 10:23:20.864195 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9fpk8"] Feb 19 10:23:20 crc kubenswrapper[4811]: I0219 10:23:20.865104 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9fpk8" Feb 19 10:23:20 crc kubenswrapper[4811]: I0219 10:23:20.876532 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9fpk8"] Feb 19 10:23:20 crc kubenswrapper[4811]: I0219 10:23:20.945234 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqjg\" (UniqueName: \"kubernetes.io/projected/f23b3244-ec1f-411f-8dd2-15513a0c6c17-kube-api-access-bbqjg\") pod \"openstack-operator-index-9fpk8\" (UID: \"f23b3244-ec1f-411f-8dd2-15513a0c6c17\") " pod="openstack-operators/openstack-operator-index-9fpk8" Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.047296 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqjg\" (UniqueName: \"kubernetes.io/projected/f23b3244-ec1f-411f-8dd2-15513a0c6c17-kube-api-access-bbqjg\") pod \"openstack-operator-index-9fpk8\" (UID: \"f23b3244-ec1f-411f-8dd2-15513a0c6c17\") " pod="openstack-operators/openstack-operator-index-9fpk8" Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.077521 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqjg\" (UniqueName: \"kubernetes.io/projected/f23b3244-ec1f-411f-8dd2-15513a0c6c17-kube-api-access-bbqjg\") pod \"openstack-operator-index-9fpk8\" (UID: \"f23b3244-ec1f-411f-8dd2-15513a0c6c17\") " pod="openstack-operators/openstack-operator-index-9fpk8" Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.157985 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ldg8d" Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.250100 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trglm\" (UniqueName: \"kubernetes.io/projected/131c8975-2e08-497d-945f-4e465a3a2b87-kube-api-access-trglm\") pod \"131c8975-2e08-497d-945f-4e465a3a2b87\" (UID: \"131c8975-2e08-497d-945f-4e465a3a2b87\") " Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.253542 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131c8975-2e08-497d-945f-4e465a3a2b87-kube-api-access-trglm" (OuterVolumeSpecName: "kube-api-access-trglm") pod "131c8975-2e08-497d-945f-4e465a3a2b87" (UID: "131c8975-2e08-497d-945f-4e465a3a2b87"). InnerVolumeSpecName "kube-api-access-trglm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.255929 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9fpk8" Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.352807 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trglm\" (UniqueName: \"kubernetes.io/projected/131c8975-2e08-497d-945f-4e465a3a2b87-kube-api-access-trglm\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.682979 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9fpk8"] Feb 19 10:23:21 crc kubenswrapper[4811]: W0219 10:23:21.694666 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf23b3244_ec1f_411f_8dd2_15513a0c6c17.slice/crio-860ad055dfddde17495be90c7e8f77998d4953be8226c7c95f28229dff36cf6c WatchSource:0}: Error finding container 860ad055dfddde17495be90c7e8f77998d4953be8226c7c95f28229dff36cf6c: Status 404 returned error can't find the container with id 860ad055dfddde17495be90c7e8f77998d4953be8226c7c95f28229dff36cf6c Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.807892 4811 generic.go:334] "Generic (PLEG): container finished" podID="131c8975-2e08-497d-945f-4e465a3a2b87" containerID="e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf" exitCode=0 Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.807941 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ldg8d" event={"ID":"131c8975-2e08-497d-945f-4e465a3a2b87","Type":"ContainerDied","Data":"e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf"} Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.807974 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ldg8d" event={"ID":"131c8975-2e08-497d-945f-4e465a3a2b87","Type":"ContainerDied","Data":"87b9e962d36c63c1ddbd8a12950be66a3e0db554c1ec16e2c9d6230ccb4ef6b8"} Feb 19 10:23:21 
crc kubenswrapper[4811]: I0219 10:23:21.807991 4811 scope.go:117] "RemoveContainer" containerID="e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf" Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.807924 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ldg8d" Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.811021 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fpk8" event={"ID":"f23b3244-ec1f-411f-8dd2-15513a0c6c17","Type":"ContainerStarted","Data":"860ad055dfddde17495be90c7e8f77998d4953be8226c7c95f28229dff36cf6c"} Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.838489 4811 scope.go:117] "RemoveContainer" containerID="e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf" Feb 19 10:23:21 crc kubenswrapper[4811]: E0219 10:23:21.838976 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf\": container with ID starting with e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf not found: ID does not exist" containerID="e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf" Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.839019 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf"} err="failed to get container status \"e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf\": rpc error: code = NotFound desc = could not find container \"e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf\": container with ID starting with e9a3a484af3bb96120ef7a9e031c3c3a00e65c556fe6300713b804d516a010cf not found: ID does not exist" Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.862255 4811 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ldg8d"] Feb 19 10:23:21 crc kubenswrapper[4811]: I0219 10:23:21.865763 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-ldg8d"] Feb 19 10:23:22 crc kubenswrapper[4811]: I0219 10:23:22.597124 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="131c8975-2e08-497d-945f-4e465a3a2b87" path="/var/lib/kubelet/pods/131c8975-2e08-497d-945f-4e465a3a2b87/volumes" Feb 19 10:23:22 crc kubenswrapper[4811]: I0219 10:23:22.822403 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fpk8" event={"ID":"f23b3244-ec1f-411f-8dd2-15513a0c6c17","Type":"ContainerStarted","Data":"fa67349b1a50ad413a70b338bd0c7ff85c36779175d4c787a0538e38f76c039b"} Feb 19 10:23:22 crc kubenswrapper[4811]: I0219 10:23:22.840904 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9fpk8" podStartSLOduration=2.787616742 podStartE2EDuration="2.840878634s" podCreationTimestamp="2026-02-19 10:23:20 +0000 UTC" firstStartedPulling="2026-02-19 10:23:21.697535089 +0000 UTC m=+890.223999775" lastFinishedPulling="2026-02-19 10:23:21.750796961 +0000 UTC m=+890.277261667" observedRunningTime="2026-02-19 10:23:22.837160617 +0000 UTC m=+891.363625303" watchObservedRunningTime="2026-02-19 10:23:22.840878634 +0000 UTC m=+891.367343340" Feb 19 10:23:23 crc kubenswrapper[4811]: I0219 10:23:23.221269 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ngn4k" Feb 19 10:23:31 crc kubenswrapper[4811]: I0219 10:23:31.256588 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-9fpk8" Feb 19 10:23:31 crc kubenswrapper[4811]: I0219 10:23:31.257225 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-index-9fpk8" Feb 19 10:23:31 crc kubenswrapper[4811]: I0219 10:23:31.287372 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-9fpk8" Feb 19 10:23:31 crc kubenswrapper[4811]: I0219 10:23:31.918737 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-9fpk8" Feb 19 10:23:33 crc kubenswrapper[4811]: I0219 10:23:33.919123 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb"] Feb 19 10:23:33 crc kubenswrapper[4811]: E0219 10:23:33.919965 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131c8975-2e08-497d-945f-4e465a3a2b87" containerName="registry-server" Feb 19 10:23:33 crc kubenswrapper[4811]: I0219 10:23:33.919983 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="131c8975-2e08-497d-945f-4e465a3a2b87" containerName="registry-server" Feb 19 10:23:33 crc kubenswrapper[4811]: I0219 10:23:33.920145 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="131c8975-2e08-497d-945f-4e465a3a2b87" containerName="registry-server" Feb 19 10:23:33 crc kubenswrapper[4811]: I0219 10:23:33.921264 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:33 crc kubenswrapper[4811]: I0219 10:23:33.923615 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hnn56" Feb 19 10:23:33 crc kubenswrapper[4811]: I0219 10:23:33.931287 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb"] Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.058287 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-bundle\") pod \"4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.058356 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wdkt\" (UniqueName: \"kubernetes.io/projected/d439e144-6344-450b-bf1c-d141a94619d0-kube-api-access-6wdkt\") pod \"4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.058631 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-util\") pod \"4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 
10:23:34.160312 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-util\") pod \"4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.160420 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-bundle\") pod \"4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.160464 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wdkt\" (UniqueName: \"kubernetes.io/projected/d439e144-6344-450b-bf1c-d141a94619d0-kube-api-access-6wdkt\") pod \"4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.161004 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-util\") pod \"4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.161166 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-bundle\") pod \"4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.183548 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wdkt\" (UniqueName: \"kubernetes.io/projected/d439e144-6344-450b-bf1c-d141a94619d0-kube-api-access-6wdkt\") pod \"4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.245932 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.703664 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb"] Feb 19 10:23:34 crc kubenswrapper[4811]: W0219 10:23:34.704335 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd439e144_6344_450b_bf1c_d141a94619d0.slice/crio-8d70f14545658e5c0459ae886fdb0f682859c2306386bfc17a2d340b46c159cf WatchSource:0}: Error finding container 8d70f14545658e5c0459ae886fdb0f682859c2306386bfc17a2d340b46c159cf: Status 404 returned error can't find the container with id 8d70f14545658e5c0459ae886fdb0f682859c2306386bfc17a2d340b46c159cf Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.914847 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" 
event={"ID":"d439e144-6344-450b-bf1c-d141a94619d0","Type":"ContainerStarted","Data":"aa7bd21f9d733db238be94caa872d04f64c435a7f0399c537c0fedbd275e68c0"} Feb 19 10:23:34 crc kubenswrapper[4811]: I0219 10:23:34.914901 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" event={"ID":"d439e144-6344-450b-bf1c-d141a94619d0","Type":"ContainerStarted","Data":"8d70f14545658e5c0459ae886fdb0f682859c2306386bfc17a2d340b46c159cf"} Feb 19 10:23:35 crc kubenswrapper[4811]: I0219 10:23:35.923537 4811 generic.go:334] "Generic (PLEG): container finished" podID="d439e144-6344-450b-bf1c-d141a94619d0" containerID="aa7bd21f9d733db238be94caa872d04f64c435a7f0399c537c0fedbd275e68c0" exitCode=0 Feb 19 10:23:35 crc kubenswrapper[4811]: I0219 10:23:35.923590 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" event={"ID":"d439e144-6344-450b-bf1c-d141a94619d0","Type":"ContainerDied","Data":"aa7bd21f9d733db238be94caa872d04f64c435a7f0399c537c0fedbd275e68c0"} Feb 19 10:23:36 crc kubenswrapper[4811]: I0219 10:23:36.932395 4811 generic.go:334] "Generic (PLEG): container finished" podID="d439e144-6344-450b-bf1c-d141a94619d0" containerID="81ae497ea2f54c68aab5dbaf8d5ee5fc34839c654964029b90ddced9ec5c4999" exitCode=0 Feb 19 10:23:36 crc kubenswrapper[4811]: I0219 10:23:36.932484 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" event={"ID":"d439e144-6344-450b-bf1c-d141a94619d0","Type":"ContainerDied","Data":"81ae497ea2f54c68aab5dbaf8d5ee5fc34839c654964029b90ddced9ec5c4999"} Feb 19 10:23:37 crc kubenswrapper[4811]: I0219 10:23:37.942378 4811 generic.go:334] "Generic (PLEG): container finished" podID="d439e144-6344-450b-bf1c-d141a94619d0" containerID="667ad0739973305f258139b768b70ca232c80b109d3df1187d6b9b1c6afcee07" exitCode=0 Feb 
19 10:23:37 crc kubenswrapper[4811]: I0219 10:23:37.942428 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" event={"ID":"d439e144-6344-450b-bf1c-d141a94619d0","Type":"ContainerDied","Data":"667ad0739973305f258139b768b70ca232c80b109d3df1187d6b9b1c6afcee07"} Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.250631 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.334255 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-bundle\") pod \"d439e144-6344-450b-bf1c-d141a94619d0\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.334412 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wdkt\" (UniqueName: \"kubernetes.io/projected/d439e144-6344-450b-bf1c-d141a94619d0-kube-api-access-6wdkt\") pod \"d439e144-6344-450b-bf1c-d141a94619d0\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.334502 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-util\") pod \"d439e144-6344-450b-bf1c-d141a94619d0\" (UID: \"d439e144-6344-450b-bf1c-d141a94619d0\") " Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.334982 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-bundle" (OuterVolumeSpecName: "bundle") pod "d439e144-6344-450b-bf1c-d141a94619d0" (UID: "d439e144-6344-450b-bf1c-d141a94619d0"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.339593 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d439e144-6344-450b-bf1c-d141a94619d0-kube-api-access-6wdkt" (OuterVolumeSpecName: "kube-api-access-6wdkt") pod "d439e144-6344-450b-bf1c-d141a94619d0" (UID: "d439e144-6344-450b-bf1c-d141a94619d0"). InnerVolumeSpecName "kube-api-access-6wdkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.348753 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-util" (OuterVolumeSpecName: "util") pod "d439e144-6344-450b-bf1c-d141a94619d0" (UID: "d439e144-6344-450b-bf1c-d141a94619d0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.436591 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wdkt\" (UniqueName: \"kubernetes.io/projected/d439e144-6344-450b-bf1c-d141a94619d0-kube-api-access-6wdkt\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.436682 4811 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-util\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.436703 4811 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d439e144-6344-450b-bf1c-d141a94619d0-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.994757 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" 
event={"ID":"d439e144-6344-450b-bf1c-d141a94619d0","Type":"ContainerDied","Data":"8d70f14545658e5c0459ae886fdb0f682859c2306386bfc17a2d340b46c159cf"} Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.995285 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d70f14545658e5c0459ae886fdb0f682859c2306386bfc17a2d340b46c159cf" Feb 19 10:23:39 crc kubenswrapper[4811]: I0219 10:23:39.994836 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb" Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.676603 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-d974764c4-flc2t"] Feb 19 10:23:46 crc kubenswrapper[4811]: E0219 10:23:46.677147 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d439e144-6344-450b-bf1c-d141a94619d0" containerName="util" Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.677162 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d439e144-6344-450b-bf1c-d141a94619d0" containerName="util" Feb 19 10:23:46 crc kubenswrapper[4811]: E0219 10:23:46.677180 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d439e144-6344-450b-bf1c-d141a94619d0" containerName="pull" Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.677188 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d439e144-6344-450b-bf1c-d141a94619d0" containerName="pull" Feb 19 10:23:46 crc kubenswrapper[4811]: E0219 10:23:46.677204 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d439e144-6344-450b-bf1c-d141a94619d0" containerName="extract" Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.677213 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d439e144-6344-450b-bf1c-d141a94619d0" containerName="extract" Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.677350 4811 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d439e144-6344-450b-bf1c-d141a94619d0" containerName="extract" Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.677846 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d974764c4-flc2t" Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.680530 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-426vt" Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.708261 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d974764c4-flc2t"] Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.737792 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbmhf\" (UniqueName: \"kubernetes.io/projected/540975d2-b9cd-492e-9618-c4d31baf74a6-kube-api-access-kbmhf\") pod \"openstack-operator-controller-init-d974764c4-flc2t\" (UID: \"540975d2-b9cd-492e-9618-c4d31baf74a6\") " pod="openstack-operators/openstack-operator-controller-init-d974764c4-flc2t" Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.839251 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbmhf\" (UniqueName: \"kubernetes.io/projected/540975d2-b9cd-492e-9618-c4d31baf74a6-kube-api-access-kbmhf\") pod \"openstack-operator-controller-init-d974764c4-flc2t\" (UID: \"540975d2-b9cd-492e-9618-c4d31baf74a6\") " pod="openstack-operators/openstack-operator-controller-init-d974764c4-flc2t" Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.861493 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbmhf\" (UniqueName: \"kubernetes.io/projected/540975d2-b9cd-492e-9618-c4d31baf74a6-kube-api-access-kbmhf\") pod \"openstack-operator-controller-init-d974764c4-flc2t\" 
(UID: \"540975d2-b9cd-492e-9618-c4d31baf74a6\") " pod="openstack-operators/openstack-operator-controller-init-d974764c4-flc2t" Feb 19 10:23:46 crc kubenswrapper[4811]: I0219 10:23:46.999881 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-d974764c4-flc2t" Feb 19 10:23:47 crc kubenswrapper[4811]: I0219 10:23:47.496498 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-d974764c4-flc2t"] Feb 19 10:23:48 crc kubenswrapper[4811]: I0219 10:23:48.049641 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d974764c4-flc2t" event={"ID":"540975d2-b9cd-492e-9618-c4d31baf74a6","Type":"ContainerStarted","Data":"51d680a0628868b36d8fd86f215d515799a02c9fb0d6fc5d8fc239c528467f02"} Feb 19 10:23:52 crc kubenswrapper[4811]: I0219 10:23:52.075759 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-d974764c4-flc2t" event={"ID":"540975d2-b9cd-492e-9618-c4d31baf74a6","Type":"ContainerStarted","Data":"9cfe5c47a02050e37be86438a2663cbab0174929e2d468ada618fccd1a0ebd6e"} Feb 19 10:23:52 crc kubenswrapper[4811]: I0219 10:23:52.076376 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-d974764c4-flc2t" Feb 19 10:23:52 crc kubenswrapper[4811]: I0219 10:23:52.109543 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-d974764c4-flc2t" podStartSLOduration=2.151110643 podStartE2EDuration="6.109524933s" podCreationTimestamp="2026-02-19 10:23:46 +0000 UTC" firstStartedPulling="2026-02-19 10:23:47.503481121 +0000 UTC m=+916.029945807" lastFinishedPulling="2026-02-19 10:23:51.461895411 +0000 UTC m=+919.988360097" observedRunningTime="2026-02-19 10:23:52.103268531 +0000 UTC 
m=+920.629733217" watchObservedRunningTime="2026-02-19 10:23:52.109524933 +0000 UTC m=+920.635989619" Feb 19 10:23:57 crc kubenswrapper[4811]: I0219 10:23:57.003961 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-d974764c4-flc2t" Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.534039 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tm6gw"] Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.536694 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.546266 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tm6gw"] Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.724906 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9rg4\" (UniqueName: \"kubernetes.io/projected/1e7e0a50-1704-4887-ab61-3e8e69345ab9-kube-api-access-n9rg4\") pod \"community-operators-tm6gw\" (UID: \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.725016 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-catalog-content\") pod \"community-operators-tm6gw\" (UID: \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.725055 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-utilities\") pod \"community-operators-tm6gw\" 
(UID: \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.833244 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9rg4\" (UniqueName: \"kubernetes.io/projected/1e7e0a50-1704-4887-ab61-3e8e69345ab9-kube-api-access-n9rg4\") pod \"community-operators-tm6gw\" (UID: \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.833545 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-catalog-content\") pod \"community-operators-tm6gw\" (UID: \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.833656 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-utilities\") pod \"community-operators-tm6gw\" (UID: \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.834060 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-catalog-content\") pod \"community-operators-tm6gw\" (UID: \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.834205 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-utilities\") pod \"community-operators-tm6gw\" (UID: 
\"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:12 crc kubenswrapper[4811]: I0219 10:24:12.857177 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9rg4\" (UniqueName: \"kubernetes.io/projected/1e7e0a50-1704-4887-ab61-3e8e69345ab9-kube-api-access-n9rg4\") pod \"community-operators-tm6gw\" (UID: \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:13 crc kubenswrapper[4811]: I0219 10:24:13.157133 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:13 crc kubenswrapper[4811]: I0219 10:24:13.631514 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tm6gw"] Feb 19 10:24:14 crc kubenswrapper[4811]: I0219 10:24:14.451793 4811 generic.go:334] "Generic (PLEG): container finished" podID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerID="b50ae376f87b7953fcd4dce4259c4355d49f3d2757b88fa202101aa04f32feca" exitCode=0 Feb 19 10:24:14 crc kubenswrapper[4811]: I0219 10:24:14.451911 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm6gw" event={"ID":"1e7e0a50-1704-4887-ab61-3e8e69345ab9","Type":"ContainerDied","Data":"b50ae376f87b7953fcd4dce4259c4355d49f3d2757b88fa202101aa04f32feca"} Feb 19 10:24:14 crc kubenswrapper[4811]: I0219 10:24:14.452350 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm6gw" event={"ID":"1e7e0a50-1704-4887-ab61-3e8e69345ab9","Type":"ContainerStarted","Data":"b5d1afe9d99c7bc99bfa07b1b52b390e2cb51f1d1b5af72316949deff01bbe24"} Feb 19 10:24:15 crc kubenswrapper[4811]: I0219 10:24:15.461966 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm6gw" 
event={"ID":"1e7e0a50-1704-4887-ab61-3e8e69345ab9","Type":"ContainerStarted","Data":"91f2dc39157089b42db0deaea80db28d4aa1bfcb9202457a96d92f84fdf052b1"} Feb 19 10:24:16 crc kubenswrapper[4811]: I0219 10:24:16.473040 4811 generic.go:334] "Generic (PLEG): container finished" podID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerID="91f2dc39157089b42db0deaea80db28d4aa1bfcb9202457a96d92f84fdf052b1" exitCode=0 Feb 19 10:24:16 crc kubenswrapper[4811]: I0219 10:24:16.473091 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm6gw" event={"ID":"1e7e0a50-1704-4887-ab61-3e8e69345ab9","Type":"ContainerDied","Data":"91f2dc39157089b42db0deaea80db28d4aa1bfcb9202457a96d92f84fdf052b1"} Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.159991 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.161345 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.163521 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-b58rf" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.164592 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.165310 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.167413 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-drj78" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.177010 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.183312 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.195588 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.196481 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.197220 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvgph\" (UniqueName: \"kubernetes.io/projected/f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3-kube-api-access-nvgph\") pod \"cinder-operator-controller-manager-5d946d989d-c84zh\" (UID: \"f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.197283 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp5gf\" (UniqueName: \"kubernetes.io/projected/57410614-54e3-49cb-8afe-ee60a861f161-kube-api-access-pp5gf\") pod \"barbican-operator-controller-manager-868647ff47-s42cl\" (UID: \"57410614-54e3-49cb-8afe-ee60a861f161\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.198323 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.199180 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.199567 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-l8q2j" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.200940 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-c77nm" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.209803 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.271861 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.298536 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvgph\" (UniqueName: \"kubernetes.io/projected/f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3-kube-api-access-nvgph\") pod \"cinder-operator-controller-manager-5d946d989d-c84zh\" (UID: \"f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.298620 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmtm6\" (UniqueName: \"kubernetes.io/projected/f5e56e20-0181-4f50-a112-b90aeb0ea9f4-kube-api-access-rmtm6\") pod \"designate-operator-controller-manager-6d8bf5c495-dzgff\" (UID: \"f5e56e20-0181-4f50-a112-b90aeb0ea9f4\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.298652 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-cjv9f\" (UniqueName: \"kubernetes.io/projected/f39b017a-34d2-491b-a44a-c3fc5ea573b7-kube-api-access-cjv9f\") pod \"glance-operator-controller-manager-77987464f4-wh6gz\" (UID: \"f39b017a-34d2-491b-a44a-c3fc5ea573b7\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.298695 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5gf\" (UniqueName: \"kubernetes.io/projected/57410614-54e3-49cb-8afe-ee60a861f161-kube-api-access-pp5gf\") pod \"barbican-operator-controller-manager-868647ff47-s42cl\" (UID: \"57410614-54e3-49cb-8afe-ee60a861f161\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.307264 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.308142 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.314262 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.315682 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jxnhw" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.327326 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.331578 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.338383 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jfgv8" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.343097 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5gf\" (UniqueName: \"kubernetes.io/projected/57410614-54e3-49cb-8afe-ee60a861f161-kube-api-access-pp5gf\") pod \"barbican-operator-controller-manager-868647ff47-s42cl\" (UID: \"57410614-54e3-49cb-8afe-ee60a861f161\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.355708 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvgph\" (UniqueName: \"kubernetes.io/projected/f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3-kube-api-access-nvgph\") pod \"cinder-operator-controller-manager-5d946d989d-c84zh\" (UID: \"f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.355710 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.356841 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.369704 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n99n7" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.369889 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.385671 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.388819 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.399655 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjv9f\" (UniqueName: \"kubernetes.io/projected/f39b017a-34d2-491b-a44a-c3fc5ea573b7-kube-api-access-cjv9f\") pod \"glance-operator-controller-manager-77987464f4-wh6gz\" (UID: \"f39b017a-34d2-491b-a44a-c3fc5ea573b7\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.400346 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.401239 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.402013 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmtm6\" (UniqueName: \"kubernetes.io/projected/f5e56e20-0181-4f50-a112-b90aeb0ea9f4-kube-api-access-rmtm6\") pod \"designate-operator-controller-manager-6d8bf5c495-dzgff\" (UID: \"f5e56e20-0181-4f50-a112-b90aeb0ea9f4\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.402822 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4946p" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.419744 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.424499 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.425431 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.430804 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-79scg" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.435790 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjv9f\" (UniqueName: \"kubernetes.io/projected/f39b017a-34d2-491b-a44a-c3fc5ea573b7-kube-api-access-cjv9f\") pod \"glance-operator-controller-manager-77987464f4-wh6gz\" (UID: \"f39b017a-34d2-491b-a44a-c3fc5ea573b7\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.435846 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.436690 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.443205 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cpqkm" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.464693 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.465808 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.472378 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmtm6\" (UniqueName: \"kubernetes.io/projected/f5e56e20-0181-4f50-a112-b90aeb0ea9f4-kube-api-access-rmtm6\") pod \"designate-operator-controller-manager-6d8bf5c495-dzgff\" (UID: \"f5e56e20-0181-4f50-a112-b90aeb0ea9f4\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.477998 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hgsns" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.488509 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.503590 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc7cm\" (UniqueName: \"kubernetes.io/projected/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-kube-api-access-kc7cm\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.503631 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlxz8\" (UniqueName: \"kubernetes.io/projected/46b8deed-b9de-4b32-b8e4-c3edc4ad97f9-kube-api-access-nlxz8\") pod \"ironic-operator-controller-manager-554564d7fc-d66mq\" (UID: \"46b8deed-b9de-4b32-b8e4-c3edc4ad97f9\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.503682 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd55r\" (UniqueName: \"kubernetes.io/projected/5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7-kube-api-access-nd55r\") pod \"heat-operator-controller-manager-69f49c598c-9fqp5\" (UID: \"5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.503756 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whjb7\" (UniqueName: \"kubernetes.io/projected/3c49a030-0a6a-434c-986e-6957d9d27043-kube-api-access-whjb7\") pod \"horizon-operator-controller-manager-5b9b8895d5-cjqjn\" (UID: \"3c49a030-0a6a-434c-986e-6957d9d27043\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.503793 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.514205 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.527020 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm6gw" event={"ID":"1e7e0a50-1704-4887-ab61-3e8e69345ab9","Type":"ContainerStarted","Data":"3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c"} Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.537295 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.538424 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.543941 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-nctvd" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.559528 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.561289 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.590900 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.591701 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.592045 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.593157 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tk8lr" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.595916 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.615158 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.616018 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whjb7\" (UniqueName: \"kubernetes.io/projected/3c49a030-0a6a-434c-986e-6957d9d27043-kube-api-access-whjb7\") pod \"horizon-operator-controller-manager-5b9b8895d5-cjqjn\" (UID: \"3c49a030-0a6a-434c-986e-6957d9d27043\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.616085 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.616123 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75nmr\" (UniqueName: \"kubernetes.io/projected/6c3aab50-1956-41f1-be69-aa976020f25e-kube-api-access-75nmr\") pod \"nova-operator-controller-manager-567668f5cf-2lvw2\" (UID: \"6c3aab50-1956-41f1-be69-aa976020f25e\") 
" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.616165 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc7cm\" (UniqueName: \"kubernetes.io/projected/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-kube-api-access-kc7cm\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.616207 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlxz8\" (UniqueName: \"kubernetes.io/projected/46b8deed-b9de-4b32-b8e4-c3edc4ad97f9-kube-api-access-nlxz8\") pod \"ironic-operator-controller-manager-554564d7fc-d66mq\" (UID: \"46b8deed-b9de-4b32-b8e4-c3edc4ad97f9\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.616289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd55r\" (UniqueName: \"kubernetes.io/projected/5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7-kube-api-access-nd55r\") pod \"heat-operator-controller-manager-69f49c598c-9fqp5\" (UID: \"5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" Feb 19 10:24:17 crc kubenswrapper[4811]: E0219 10:24:17.616337 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:17 crc kubenswrapper[4811]: E0219 10:24:17.616420 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert podName:6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58 nodeName:}" failed. 
No retries permitted until 2026-02-19 10:24:18.116392087 +0000 UTC m=+946.642856773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert") pod "infra-operator-controller-manager-79d975b745-bqpcc" (UID: "6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58") : secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.616358 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8xn4\" (UniqueName: \"kubernetes.io/projected/b5e1b149-0349-4c1c-be95-b9c540da49f1-kube-api-access-d8xn4\") pod \"keystone-operator-controller-manager-b4d948c87-fvdc7\" (UID: \"b5e1b149-0349-4c1c-be95-b9c540da49f1\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.616727 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zk5q\" (UniqueName: \"kubernetes.io/projected/fe03e75a-b898-4de2-8569-bce0add53798-kube-api-access-7zk5q\") pod \"mariadb-operator-controller-manager-6994f66f48-frxgp\" (UID: \"fe03e75a-b898-4de2-8569-bce0add53798\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.616790 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn4kf\" (UniqueName: \"kubernetes.io/projected/8841254d-fb8a-4391-984a-02a39949ca31-kube-api-access-sn4kf\") pod \"manila-operator-controller-manager-54f6768c69-f64jn\" (UID: \"8841254d-fb8a-4391-984a-02a39949ca31\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.628111 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.630301 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.645101 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.646358 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.655615 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd55r\" (UniqueName: \"kubernetes.io/projected/5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7-kube-api-access-nd55r\") pod \"heat-operator-controller-manager-69f49c598c-9fqp5\" (UID: \"5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.666207 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc7cm\" (UniqueName: \"kubernetes.io/projected/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-kube-api-access-kc7cm\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.666319 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlxz8\" (UniqueName: \"kubernetes.io/projected/46b8deed-b9de-4b32-b8e4-c3edc4ad97f9-kube-api-access-nlxz8\") pod \"ironic-operator-controller-manager-554564d7fc-d66mq\" (UID: \"46b8deed-b9de-4b32-b8e4-c3edc4ad97f9\") " 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.666328 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whjb7\" (UniqueName: \"kubernetes.io/projected/3c49a030-0a6a-434c-986e-6957d9d27043-kube-api-access-whjb7\") pod \"horizon-operator-controller-manager-5b9b8895d5-cjqjn\" (UID: \"3c49a030-0a6a-434c-986e-6957d9d27043\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.667190 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7h99b" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.689719 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.695959 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.697303 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.700689 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.700887 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vdv5b" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.701286 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.710151 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.711253 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.717614 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tl5kj" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.719018 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn4kf\" (UniqueName: \"kubernetes.io/projected/8841254d-fb8a-4391-984a-02a39949ca31-kube-api-access-sn4kf\") pod \"manila-operator-controller-manager-54f6768c69-f64jn\" (UID: \"8841254d-fb8a-4391-984a-02a39949ca31\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.719101 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75nmr\" (UniqueName: \"kubernetes.io/projected/6c3aab50-1956-41f1-be69-aa976020f25e-kube-api-access-75nmr\") pod \"nova-operator-controller-manager-567668f5cf-2lvw2\" (UID: \"6c3aab50-1956-41f1-be69-aa976020f25e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.719150 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2k8\" (UniqueName: \"kubernetes.io/projected/ed315fb7-c37f-4d65-a048-78c1a354ceb3-kube-api-access-rm2k8\") pod \"ovn-operator-controller-manager-d44cf6b75-h5g6x\" (UID: 
\"ed315fb7-c37f-4d65-a048-78c1a354ceb3\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.719185 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rchm9\" (UniqueName: \"kubernetes.io/projected/34488f24-fe02-449f-a343-288ee29f5ec4-kube-api-access-rchm9\") pod \"octavia-operator-controller-manager-69f8888797-jzwmv\" (UID: \"34488f24-fe02-449f-a343-288ee29f5ec4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.719232 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8xn4\" (UniqueName: \"kubernetes.io/projected/b5e1b149-0349-4c1c-be95-b9c540da49f1-kube-api-access-d8xn4\") pod \"keystone-operator-controller-manager-b4d948c87-fvdc7\" (UID: \"b5e1b149-0349-4c1c-be95-b9c540da49f1\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.719265 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd7bs\" (UniqueName: \"kubernetes.io/projected/484c4e4b-0de6-427c-adde-b597fed189a7-kube-api-access-fd7bs\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.719300 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prq88\" (UniqueName: \"kubernetes.io/projected/03bff346-c835-4fba-9950-59e9e86d8a23-kube-api-access-prq88\") pod \"neutron-operator-controller-manager-64ddbf8bb-rdtd4\" (UID: \"03bff346-c835-4fba-9950-59e9e86d8a23\") " 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.719326 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zk5q\" (UniqueName: \"kubernetes.io/projected/fe03e75a-b898-4de2-8569-bce0add53798-kube-api-access-7zk5q\") pod \"mariadb-operator-controller-manager-6994f66f48-frxgp\" (UID: \"fe03e75a-b898-4de2-8569-bce0add53798\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.719352 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.720050 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.722707 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.726151 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-jnsqf" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.742364 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.795639 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zk5q\" (UniqueName: \"kubernetes.io/projected/fe03e75a-b898-4de2-8569-bce0add53798-kube-api-access-7zk5q\") pod \"mariadb-operator-controller-manager-6994f66f48-frxgp\" (UID: \"fe03e75a-b898-4de2-8569-bce0add53798\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.795709 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8xn4\" (UniqueName: \"kubernetes.io/projected/b5e1b149-0349-4c1c-be95-b9c540da49f1-kube-api-access-d8xn4\") pod \"keystone-operator-controller-manager-b4d948c87-fvdc7\" (UID: \"b5e1b149-0349-4c1c-be95-b9c540da49f1\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.796015 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.810640 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75nmr\" (UniqueName: \"kubernetes.io/projected/6c3aab50-1956-41f1-be69-aa976020f25e-kube-api-access-75nmr\") pod \"nova-operator-controller-manager-567668f5cf-2lvw2\" (UID: \"6c3aab50-1956-41f1-be69-aa976020f25e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.814017 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tm6gw" podStartSLOduration=3.244265782 podStartE2EDuration="5.813991283s" podCreationTimestamp="2026-02-19 10:24:12 +0000 UTC" firstStartedPulling="2026-02-19 10:24:14.45478489 +0000 UTC m=+942.981249596" lastFinishedPulling="2026-02-19 10:24:17.024510411 +0000 UTC m=+945.550975097" observedRunningTime="2026-02-19 10:24:17.589725355 +0000 UTC m=+946.116190051" watchObservedRunningTime="2026-02-19 10:24:17.813991283 +0000 UTC m=+946.340455969" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.814304 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.819745 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn4kf\" (UniqueName: \"kubernetes.io/projected/8841254d-fb8a-4391-984a-02a39949ca31-kube-api-access-sn4kf\") pod \"manila-operator-controller-manager-54f6768c69-f64jn\" (UID: \"8841254d-fb8a-4391-984a-02a39949ca31\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.822576 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fd7bs\" (UniqueName: \"kubernetes.io/projected/484c4e4b-0de6-427c-adde-b597fed189a7-kube-api-access-fd7bs\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.822632 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prq88\" (UniqueName: \"kubernetes.io/projected/03bff346-c835-4fba-9950-59e9e86d8a23-kube-api-access-prq88\") pod \"neutron-operator-controller-manager-64ddbf8bb-rdtd4\" (UID: \"03bff346-c835-4fba-9950-59e9e86d8a23\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.822662 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.822753 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2d5r\" (UniqueName: \"kubernetes.io/projected/0dd03f3f-2d9f-433e-942f-ebed08429ccf-kube-api-access-c2d5r\") pod \"placement-operator-controller-manager-8497b45c89-4tg5n\" (UID: \"0dd03f3f-2d9f-433e-942f-ebed08429ccf\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.822781 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm2k8\" (UniqueName: \"kubernetes.io/projected/ed315fb7-c37f-4d65-a048-78c1a354ceb3-kube-api-access-rm2k8\") 
pod \"ovn-operator-controller-manager-d44cf6b75-h5g6x\" (UID: \"ed315fb7-c37f-4d65-a048-78c1a354ceb3\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.822808 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rchm9\" (UniqueName: \"kubernetes.io/projected/34488f24-fe02-449f-a343-288ee29f5ec4-kube-api-access-rchm9\") pod \"octavia-operator-controller-manager-69f8888797-jzwmv\" (UID: \"34488f24-fe02-449f-a343-288ee29f5ec4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" Feb 19 10:24:17 crc kubenswrapper[4811]: E0219 10:24:17.823577 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:17 crc kubenswrapper[4811]: E0219 10:24:17.823630 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert podName:484c4e4b-0de6-427c-adde-b597fed189a7 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:18.323614442 +0000 UTC m=+946.850079128 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" (UID: "484c4e4b-0de6-427c-adde-b597fed189a7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.860252 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rchm9\" (UniqueName: \"kubernetes.io/projected/34488f24-fe02-449f-a343-288ee29f5ec4-kube-api-access-rchm9\") pod \"octavia-operator-controller-manager-69f8888797-jzwmv\" (UID: \"34488f24-fe02-449f-a343-288ee29f5ec4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.862255 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.863205 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd7bs\" (UniqueName: \"kubernetes.io/projected/484c4e4b-0de6-427c-adde-b597fed189a7-kube-api-access-fd7bs\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.868959 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prq88\" (UniqueName: \"kubernetes.io/projected/03bff346-c835-4fba-9950-59e9e86d8a23-kube-api-access-prq88\") pod \"neutron-operator-controller-manager-64ddbf8bb-rdtd4\" (UID: \"03bff346-c835-4fba-9950-59e9e86d8a23\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.872332 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2k8\" (UniqueName: \"kubernetes.io/projected/ed315fb7-c37f-4d65-a048-78c1a354ceb3-kube-api-access-rm2k8\") pod \"ovn-operator-controller-manager-d44cf6b75-h5g6x\" (UID: \"ed315fb7-c37f-4d65-a048-78c1a354ceb3\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.887881 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.889581 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.891730 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.898759 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.903121 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rzskk" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.903139 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.910285 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.913330 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zcrvn" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.929096 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.929281 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvj88\" (UniqueName: \"kubernetes.io/projected/058e8da6-f607-4153-bb9e-af486f6c87e7-kube-api-access-pvj88\") pod \"swift-operator-controller-manager-68f46476f-6f8zw\" (UID: \"058e8da6-f607-4153-bb9e-af486f6c87e7\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.929417 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2d5r\" (UniqueName: \"kubernetes.io/projected/0dd03f3f-2d9f-433e-942f-ebed08429ccf-kube-api-access-c2d5r\") pod \"placement-operator-controller-manager-8497b45c89-4tg5n\" (UID: \"0dd03f3f-2d9f-433e-942f-ebed08429ccf\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.929522 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnlwp\" (UniqueName: \"kubernetes.io/projected/eab9f83b-209d-4688-8c7f-07840bae01ff-kube-api-access-jnlwp\") pod \"telemetry-operator-controller-manager-7f45b4ff68-456kj\" (UID: \"eab9f83b-209d-4688-8c7f-07840bae01ff\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" Feb 19 10:24:17 crc kubenswrapper[4811]: 
I0219 10:24:17.937400 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.960192 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.968422 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.970042 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2d5r\" (UniqueName: \"kubernetes.io/projected/0dd03f3f-2d9f-433e-942f-ebed08429ccf-kube-api-access-c2d5r\") pod \"placement-operator-controller-manager-8497b45c89-4tg5n\" (UID: \"0dd03f3f-2d9f-433e-942f-ebed08429ccf\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.977370 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-8xp7b"] Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.978330 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-8xp7b" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.985036 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-zkk6t" Feb 19 10:24:17 crc kubenswrapper[4811]: I0219 10:24:17.985830 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-8xp7b"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.003575 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.005621 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.019843 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.020635 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.022032 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.025664 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9z2jf" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.030782 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvj88\" (UniqueName: \"kubernetes.io/projected/058e8da6-f607-4153-bb9e-af486f6c87e7-kube-api-access-pvj88\") pod \"swift-operator-controller-manager-68f46476f-6f8zw\" (UID: \"058e8da6-f607-4153-bb9e-af486f6c87e7\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.030859 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dth6w\" (UniqueName: \"kubernetes.io/projected/75dd64a6-abdc-4d68-b9ca-abe41484b894-kube-api-access-dth6w\") pod \"test-operator-controller-manager-7866795846-8xp7b\" (UID: 
\"75dd64a6-abdc-4d68-b9ca-abe41484b894\") " pod="openstack-operators/test-operator-controller-manager-7866795846-8xp7b" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.030942 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmvpj\" (UniqueName: \"kubernetes.io/projected/b4dc2b86-5147-4420-a9ef-a2ef94991ed3-kube-api-access-xmvpj\") pod \"watcher-operator-controller-manager-5db88f68c-czvjr\" (UID: \"b4dc2b86-5147-4420-a9ef-a2ef94991ed3\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.030992 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnlwp\" (UniqueName: \"kubernetes.io/projected/eab9f83b-209d-4688-8c7f-07840bae01ff-kube-api-access-jnlwp\") pod \"telemetry-operator-controller-manager-7f45b4ff68-456kj\" (UID: \"eab9f83b-209d-4688-8c7f-07840bae01ff\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.038616 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.061324 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvj88\" (UniqueName: \"kubernetes.io/projected/058e8da6-f607-4153-bb9e-af486f6c87e7-kube-api-access-pvj88\") pod \"swift-operator-controller-manager-68f46476f-6f8zw\" (UID: \"058e8da6-f607-4153-bb9e-af486f6c87e7\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.064022 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnlwp\" (UniqueName: \"kubernetes.io/projected/eab9f83b-209d-4688-8c7f-07840bae01ff-kube-api-access-jnlwp\") pod 
\"telemetry-operator-controller-manager-7f45b4ff68-456kj\" (UID: \"eab9f83b-209d-4688-8c7f-07840bae01ff\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.082564 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.083392 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.089976 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5vrfs" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.092594 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.092617 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.102689 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.130334 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.136001 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dth6w\" (UniqueName: \"kubernetes.io/projected/75dd64a6-abdc-4d68-b9ca-abe41484b894-kube-api-access-dth6w\") pod \"test-operator-controller-manager-7866795846-8xp7b\" (UID: \"75dd64a6-abdc-4d68-b9ca-abe41484b894\") " 
pod="openstack-operators/test-operator-controller-manager-7866795846-8xp7b" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.136273 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.136359 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmvpj\" (UniqueName: \"kubernetes.io/projected/b4dc2b86-5147-4420-a9ef-a2ef94991ed3-kube-api-access-xmvpj\") pod \"watcher-operator-controller-manager-5db88f68c-czvjr\" (UID: \"b4dc2b86-5147-4420-a9ef-a2ef94991ed3\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.136836 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t" Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.143178 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.143331 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert podName:6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:19.143314567 +0000 UTC m=+947.669779253 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert") pod "infra-operator-controller-manager-79d975b745-bqpcc" (UID: "6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58") : secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.144370 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.145994 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.149638 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-lvzvh" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.162057 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmvpj\" (UniqueName: \"kubernetes.io/projected/b4dc2b86-5147-4420-a9ef-a2ef94991ed3-kube-api-access-xmvpj\") pod \"watcher-operator-controller-manager-5db88f68c-czvjr\" (UID: \"b4dc2b86-5147-4420-a9ef-a2ef94991ed3\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.162156 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dth6w\" (UniqueName: \"kubernetes.io/projected/75dd64a6-abdc-4d68-b9ca-abe41484b894-kube-api-access-dth6w\") pod \"test-operator-controller-manager-7866795846-8xp7b\" (UID: \"75dd64a6-abdc-4d68-b9ca-abe41484b894\") " pod="openstack-operators/test-operator-controller-manager-7866795846-8xp7b" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.163813 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.240284 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.240436 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.240475 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzhwf\" (UniqueName: \"kubernetes.io/projected/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-kube-api-access-kzhwf\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.240534 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtn88\" (UniqueName: \"kubernetes.io/projected/9def098d-8e58-4145-a851-134a7b2620b5-kube-api-access-qtn88\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5vw7t\" (UID: \"9def098d-8e58-4145-a851-134a7b2620b5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t" Feb 
19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.248142 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.277644 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.291499 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.342005 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-8xp7b" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.342384 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.343496 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.343524 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzhwf\" (UniqueName: 
\"kubernetes.io/projected/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-kube-api-access-kzhwf\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.343681 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtn88\" (UniqueName: \"kubernetes.io/projected/9def098d-8e58-4145-a851-134a7b2620b5-kube-api-access-qtn88\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5vw7t\" (UID: \"9def098d-8e58-4145-a851-134a7b2620b5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.343735 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.342497 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.343988 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert podName:484c4e4b-0de6-427c-adde-b597fed189a7 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:19.343960812 +0000 UTC m=+947.870425498 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" (UID: "484c4e4b-0de6-427c-adde-b597fed189a7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.344490 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.344541 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs podName:7f4a5782-be5f-4a0d-be5d-b8a5014c4711 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:18.844513867 +0000 UTC m=+947.370978553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs") pod "openstack-operator-controller-manager-7b558f465f-z4vhq" (UID: "7f4a5782-be5f-4a0d-be5d-b8a5014c4711") : secret "metrics-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.343913 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.344926 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs podName:7f4a5782-be5f-4a0d-be5d-b8a5014c4711 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:18.844917327 +0000 UTC m=+947.371382013 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs") pod "openstack-operator-controller-manager-7b558f465f-z4vhq" (UID: "7f4a5782-be5f-4a0d-be5d-b8a5014c4711") : secret "webhook-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.369643 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.370883 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.387065 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzhwf\" (UniqueName: \"kubernetes.io/projected/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-kube-api-access-kzhwf\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.388694 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.391721 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtn88\" (UniqueName: \"kubernetes.io/projected/9def098d-8e58-4145-a851-134a7b2620b5-kube-api-access-qtn88\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5vw7t\" (UID: \"9def098d-8e58-4145-a851-134a7b2620b5\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.405752 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.536413 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.536799 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff" event={"ID":"f5e56e20-0181-4f50-a112-b90aeb0ea9f4","Type":"ContainerStarted","Data":"7bd6f3ba547dd98607dda296a14fb6c4cfcf13237846d2c418eda175b8c518dc"} Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.536822 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz" event={"ID":"f39b017a-34d2-491b-a44a-c3fc5ea573b7","Type":"ContainerStarted","Data":"2988a9e786432e8d12ba90278a6716b210bf4e59c3435493f83aa7ce9c0a6d73"} Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.542883 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" event={"ID":"f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3","Type":"ContainerStarted","Data":"42ec39ad215d5bec6a2448761c40c5c5085f3214b0f9411e7d43d1667cb0571a"} Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.544211 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.546413 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl" event={"ID":"57410614-54e3-49cb-8afe-ee60a861f161","Type":"ContainerStarted","Data":"80b17d0048c9c8b72d21635afb1087125ac344910714720c13c5fc6f7f15ead2"} Feb 19 10:24:18 crc kubenswrapper[4811]: W0219 10:24:18.591529 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bf4b28e_b5bd_4caa_a07e_b35c20fa16e7.slice/crio-3313ab5004dd98af7864dc172d0dbebae11add37871f8f4545b84e56937a4a80 WatchSource:0}: Error finding container 3313ab5004dd98af7864dc172d0dbebae11add37871f8f4545b84e56937a4a80: Status 404 returned error can't find the container with id 3313ab5004dd98af7864dc172d0dbebae11add37871f8f4545b84e56937a4a80 Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.815738 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7"] Feb 19 10:24:18 crc kubenswrapper[4811]: W0219 10:24:18.827265 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5e1b149_0349_4c1c_be95_b9c540da49f1.slice/crio-a63ae6274abca4568e93bac802444fbe3a82df4780f608d141bfdf62e4d444a7 WatchSource:0}: Error finding container a63ae6274abca4568e93bac802444fbe3a82df4780f608d141bfdf62e4d444a7: Status 404 returned error can't find the container with id a63ae6274abca4568e93bac802444fbe3a82df4780f608d141bfdf62e4d444a7 Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.852385 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs\") pod 
\"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.852532 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.852734 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.852859 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs podName:7f4a5782-be5f-4a0d-be5d-b8a5014c4711 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:19.852797144 +0000 UTC m=+948.379261830 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs") pod "openstack-operator-controller-manager-7b558f465f-z4vhq" (UID: "7f4a5782-be5f-4a0d-be5d-b8a5014c4711") : secret "metrics-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.853470 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: E0219 10:24:18.853514 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs podName:7f4a5782-be5f-4a0d-be5d-b8a5014c4711 nodeName:}" failed. 
No retries permitted until 2026-02-19 10:24:19.853505492 +0000 UTC m=+948.379970178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs") pod "openstack-operator-controller-manager-7b558f465f-z4vhq" (UID: "7f4a5782-be5f-4a0d-be5d-b8a5014c4711") : secret "webhook-server-cert" not found Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.854216 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq"] Feb 19 10:24:18 crc kubenswrapper[4811]: I0219 10:24:18.877700 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn"] Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.159588 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.159798 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.159841 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert podName:6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:21.15982716 +0000 UTC m=+949.686291846 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert") pod "infra-operator-controller-manager-79d975b745-bqpcc" (UID: "6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58") : secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.251308 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp"] Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.267523 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4"] Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.276297 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw"] Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.314763 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x"] Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.319832 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn"] Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.332721 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv"] Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.338774 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-8xp7b"] Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.344439 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj"] Feb 19 10:24:19 crc kubenswrapper[4811]: W0219 10:24:19.348699 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8841254d_fb8a_4391_984a_02a39949ca31.slice/crio-ccc8608a80bc5e91aa191c39615d987477913a5c721258b42a6832b985b4e0b8 WatchSource:0}: Error finding container ccc8608a80bc5e91aa191c39615d987477913a5c721258b42a6832b985b4e0b8: Status 404 returned error can't find the container with id ccc8608a80bc5e91aa191c39615d987477913a5c721258b42a6832b985b4e0b8 Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.352609 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2"] Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.359145 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr"] Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.361176 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sn4kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-f64jn_openstack-operators(8841254d-fb8a-4391-984a-02a39949ca31): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.362345 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" podUID="8841254d-fb8a-4391-984a-02a39949ca31" Feb 19 10:24:19 crc 
kubenswrapper[4811]: I0219 10:24:19.364770 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t"] Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.367649 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.367801 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.367856 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert podName:484c4e4b-0de6-427c-adde-b597fed189a7 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:21.367837597 +0000 UTC m=+949.894302283 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" (UID: "484c4e4b-0de6-427c-adde-b597fed189a7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.369887 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rchm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-jzwmv_openstack-operators(34488f24-fe02-449f-a343-288ee29f5ec4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.371144 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" podUID="34488f24-fe02-449f-a343-288ee29f5ec4" Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.371307 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n"] Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.373338 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmvpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-czvjr_openstack-operators(b4dc2b86-5147-4420-a9ef-a2ef94991ed3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.374844 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" podUID="b4dc2b86-5147-4420-a9ef-a2ef94991ed3" Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.383734 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c2d5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-4tg5n_openstack-operators(0dd03f3f-2d9f-433e-942f-ebed08429ccf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.385925 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" podUID="0dd03f3f-2d9f-433e-942f-ebed08429ccf" Feb 19 10:24:19 crc kubenswrapper[4811]: W0219 10:24:19.401073 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9def098d_8e58_4145_a851_134a7b2620b5.slice/crio-a296e50eeda8d90909a75fae7de8cb365e87a2fc4ced9ef24540c65c7ad18142 WatchSource:0}: Error finding container a296e50eeda8d90909a75fae7de8cb365e87a2fc4ced9ef24540c65c7ad18142: Status 404 returned error can't find the container with id a296e50eeda8d90909a75fae7de8cb365e87a2fc4ced9ef24540c65c7ad18142 Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.421970 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qtn88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5vw7t_openstack-operators(9def098d-8e58-4145-a851-134a7b2620b5): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.423202 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t" podUID="9def098d-8e58-4145-a851-134a7b2620b5" Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.588960 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp" event={"ID":"fe03e75a-b898-4de2-8569-bce0add53798","Type":"ContainerStarted","Data":"9350e711937ed07f81d28702675f2f326446bcf236b192cfdac001773b22b51f"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.604028 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x" event={"ID":"ed315fb7-c37f-4d65-a048-78c1a354ceb3","Type":"ContainerStarted","Data":"561c38e0078e3f719c54e0c64b8c71d0287b5dd6a136e9ff444dad266aee3ccc"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.605997 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" event={"ID":"8841254d-fb8a-4391-984a-02a39949ca31","Type":"ContainerStarted","Data":"ccc8608a80bc5e91aa191c39615d987477913a5c721258b42a6832b985b4e0b8"} Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.616424 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" podUID="8841254d-fb8a-4391-984a-02a39949ca31" Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.620670 4811 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" event={"ID":"0dd03f3f-2d9f-433e-942f-ebed08429ccf","Type":"ContainerStarted","Data":"eb85c81dde88b16aabe57471a42f6d44c45121e8b572f5b235da8b999718a896"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.627856 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" event={"ID":"b4dc2b86-5147-4420-a9ef-a2ef94991ed3","Type":"ContainerStarted","Data":"3218ae7b329698283cdb672adf6d4e90845a59e8672efda0afcabfd4971e9889"} Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.633687 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" podUID="0dd03f3f-2d9f-433e-942f-ebed08429ccf" Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.633975 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" podUID="b4dc2b86-5147-4420-a9ef-a2ef94991ed3" Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.643949 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq" event={"ID":"46b8deed-b9de-4b32-b8e4-c3edc4ad97f9","Type":"ContainerStarted","Data":"a79c9adaec8d3b3668cc29fb778d6916befec1f670e516a0b827c8fd76eefead"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.660522 4811 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw" event={"ID":"058e8da6-f607-4153-bb9e-af486f6c87e7","Type":"ContainerStarted","Data":"e57a7a96ad89749e2bd98ce39648dc9cf8084d55bce82a18c62d585fe5457576"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.674800 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t" event={"ID":"9def098d-8e58-4145-a851-134a7b2620b5","Type":"ContainerStarted","Data":"a296e50eeda8d90909a75fae7de8cb365e87a2fc4ced9ef24540c65c7ad18142"} Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.680728 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t" podUID="9def098d-8e58-4145-a851-134a7b2620b5" Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.682102 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" event={"ID":"6c3aab50-1956-41f1-be69-aa976020f25e","Type":"ContainerStarted","Data":"0efd0a1c9e3c060e17eb4baa37c7b983db66cb91dfe28b326bf4c3f1a69a5a43"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.685361 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4" event={"ID":"03bff346-c835-4fba-9950-59e9e86d8a23","Type":"ContainerStarted","Data":"59a8ed75f4ca119e5ec3e2357051c7716d01bb4392f633d74fb8968487a498b2"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.688792 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" 
event={"ID":"34488f24-fe02-449f-a343-288ee29f5ec4","Type":"ContainerStarted","Data":"a75113276ccd0babef81b0e9fd326a1eb627e6e6d62cc9eb60c19034f65ec1dc"} Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.691274 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" podUID="34488f24-fe02-449f-a343-288ee29f5ec4" Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.693686 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" event={"ID":"b5e1b149-0349-4c1c-be95-b9c540da49f1","Type":"ContainerStarted","Data":"a63ae6274abca4568e93bac802444fbe3a82df4780f608d141bfdf62e4d444a7"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.696250 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" event={"ID":"eab9f83b-209d-4688-8c7f-07840bae01ff","Type":"ContainerStarted","Data":"5aefd1960e98aa855dc33af8910af309648ddde8269927104af9a51e51523200"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.725094 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-8xp7b" event={"ID":"75dd64a6-abdc-4d68-b9ca-abe41484b894","Type":"ContainerStarted","Data":"a2b8104a65e8f300b87ab05384544f0817f1bbcee6d25113f03503f8e0f03687"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.733684 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" 
event={"ID":"5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7","Type":"ContainerStarted","Data":"3313ab5004dd98af7864dc172d0dbebae11add37871f8f4545b84e56937a4a80"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.741847 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn" event={"ID":"3c49a030-0a6a-434c-986e-6957d9d27043","Type":"ContainerStarted","Data":"8a85d44caf9b83f3739460c870d67249521c969306d4ba454c8f476ff23de784"} Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.876862 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:19 crc kubenswrapper[4811]: I0219 10:24:19.876966 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.877097 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.877146 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs podName:7f4a5782-be5f-4a0d-be5d-b8a5014c4711 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:21.87713083 +0000 UTC m=+950.403595506 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs") pod "openstack-operator-controller-manager-7b558f465f-z4vhq" (UID: "7f4a5782-be5f-4a0d-be5d-b8a5014c4711") : secret "webhook-server-cert" not found Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.877523 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 10:24:19 crc kubenswrapper[4811]: E0219 10:24:19.877548 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs podName:7f4a5782-be5f-4a0d-be5d-b8a5014c4711 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:21.877540801 +0000 UTC m=+950.404005487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs") pod "openstack-operator-controller-manager-7b558f465f-z4vhq" (UID: "7f4a5782-be5f-4a0d-be5d-b8a5014c4711") : secret "metrics-server-cert" not found Feb 19 10:24:20 crc kubenswrapper[4811]: E0219 10:24:20.761008 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t" podUID="9def098d-8e58-4145-a851-134a7b2620b5" Feb 19 10:24:20 crc kubenswrapper[4811]: E0219 10:24:20.761317 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" 
pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" podUID="34488f24-fe02-449f-a343-288ee29f5ec4" Feb 19 10:24:20 crc kubenswrapper[4811]: E0219 10:24:20.761370 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" podUID="0dd03f3f-2d9f-433e-942f-ebed08429ccf" Feb 19 10:24:20 crc kubenswrapper[4811]: E0219 10:24:20.761419 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" podUID="8841254d-fb8a-4391-984a-02a39949ca31" Feb 19 10:24:20 crc kubenswrapper[4811]: E0219 10:24:20.761610 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" podUID="b4dc2b86-5147-4420-a9ef-a2ef94991ed3" Feb 19 10:24:21 crc kubenswrapper[4811]: I0219 10:24:21.203241 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:21 crc 
kubenswrapper[4811]: E0219 10:24:21.203463 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:21 crc kubenswrapper[4811]: E0219 10:24:21.203515 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert podName:6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:25.203498893 +0000 UTC m=+953.729963579 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert") pod "infra-operator-controller-manager-79d975b745-bqpcc" (UID: "6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58") : secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:21 crc kubenswrapper[4811]: I0219 10:24:21.407209 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:21 crc kubenswrapper[4811]: E0219 10:24:21.407698 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:21 crc kubenswrapper[4811]: E0219 10:24:21.407763 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert podName:484c4e4b-0de6-427c-adde-b597fed189a7 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:25.407743011 +0000 UTC m=+953.934207697 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" (UID: "484c4e4b-0de6-427c-adde-b597fed189a7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:21 crc kubenswrapper[4811]: E0219 10:24:21.917923 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 10:24:21 crc kubenswrapper[4811]: E0219 10:24:21.918043 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs podName:7f4a5782-be5f-4a0d-be5d-b8a5014c4711 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:25.91802129 +0000 UTC m=+954.444485976 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs") pod "openstack-operator-controller-manager-7b558f465f-z4vhq" (UID: "7f4a5782-be5f-4a0d-be5d-b8a5014c4711") : secret "webhook-server-cert" not found Feb 19 10:24:21 crc kubenswrapper[4811]: I0219 10:24:21.917734 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:21 crc kubenswrapper[4811]: I0219 10:24:21.918722 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " 
pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:21 crc kubenswrapper[4811]: E0219 10:24:21.918842 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 10:24:21 crc kubenswrapper[4811]: E0219 10:24:21.918881 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs podName:7f4a5782-be5f-4a0d-be5d-b8a5014c4711 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:25.918870962 +0000 UTC m=+954.445335648 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs") pod "openstack-operator-controller-manager-7b558f465f-z4vhq" (UID: "7f4a5782-be5f-4a0d-be5d-b8a5014c4711") : secret "metrics-server-cert" not found Feb 19 10:24:23 crc kubenswrapper[4811]: I0219 10:24:23.158024 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:23 crc kubenswrapper[4811]: I0219 10:24:23.158440 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:23 crc kubenswrapper[4811]: I0219 10:24:23.215015 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:23 crc kubenswrapper[4811]: I0219 10:24:23.828916 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:23 crc kubenswrapper[4811]: I0219 10:24:23.874933 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tm6gw"] Feb 19 10:24:25 crc kubenswrapper[4811]: I0219 10:24:25.281840 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:25 crc kubenswrapper[4811]: E0219 10:24:25.282009 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:25 crc kubenswrapper[4811]: E0219 10:24:25.282057 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert podName:6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:33.282042728 +0000 UTC m=+961.808507414 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert") pod "infra-operator-controller-manager-79d975b745-bqpcc" (UID: "6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58") : secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:25 crc kubenswrapper[4811]: I0219 10:24:25.484800 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:25 crc kubenswrapper[4811]: E0219 10:24:25.484980 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:25 crc kubenswrapper[4811]: E0219 10:24:25.485280 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert podName:484c4e4b-0de6-427c-adde-b597fed189a7 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:33.48526242 +0000 UTC m=+962.011727126 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" (UID: "484c4e4b-0de6-427c-adde-b597fed189a7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:25 crc kubenswrapper[4811]: I0219 10:24:25.798413 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tm6gw" podUID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerName="registry-server" containerID="cri-o://3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c" gracePeriod=2 Feb 19 10:24:25 crc kubenswrapper[4811]: I0219 10:24:25.863273 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qftq4"] Feb 19 10:24:25 crc kubenswrapper[4811]: I0219 10:24:25.865413 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:24:25 crc kubenswrapper[4811]: I0219 10:24:25.877633 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qftq4"] Feb 19 10:24:25 crc kubenswrapper[4811]: I0219 10:24:25.992863 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq8nr\" (UniqueName: \"kubernetes.io/projected/8b733b4b-8597-484d-a6a6-b5635e80b483-kube-api-access-hq8nr\") pod \"certified-operators-qftq4\" (UID: \"8b733b4b-8597-484d-a6a6-b5635e80b483\") " pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:24:25 crc kubenswrapper[4811]: I0219 10:24:25.992962 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:25 crc kubenswrapper[4811]: I0219 10:24:25.993029 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b733b4b-8597-484d-a6a6-b5635e80b483-utilities\") pod \"certified-operators-qftq4\" (UID: \"8b733b4b-8597-484d-a6a6-b5635e80b483\") " pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:24:25 crc kubenswrapper[4811]: I0219 10:24:25.993086 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:25 crc 
kubenswrapper[4811]: I0219 10:24:25.993110 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b733b4b-8597-484d-a6a6-b5635e80b483-catalog-content\") pod \"certified-operators-qftq4\" (UID: \"8b733b4b-8597-484d-a6a6-b5635e80b483\") " pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:24:25 crc kubenswrapper[4811]: E0219 10:24:25.993165 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 10:24:25 crc kubenswrapper[4811]: E0219 10:24:25.993251 4811 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 10:24:25 crc kubenswrapper[4811]: E0219 10:24:25.993279 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs podName:7f4a5782-be5f-4a0d-be5d-b8a5014c4711 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:33.99325587 +0000 UTC m=+962.519720616 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs") pod "openstack-operator-controller-manager-7b558f465f-z4vhq" (UID: "7f4a5782-be5f-4a0d-be5d-b8a5014c4711") : secret "metrics-server-cert" not found Feb 19 10:24:25 crc kubenswrapper[4811]: E0219 10:24:25.993307 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs podName:7f4a5782-be5f-4a0d-be5d-b8a5014c4711 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:33.993289591 +0000 UTC m=+962.519754377 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs") pod "openstack-operator-controller-manager-7b558f465f-z4vhq" (UID: "7f4a5782-be5f-4a0d-be5d-b8a5014c4711") : secret "webhook-server-cert" not found Feb 19 10:24:26 crc kubenswrapper[4811]: I0219 10:24:26.095168 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b733b4b-8597-484d-a6a6-b5635e80b483-catalog-content\") pod \"certified-operators-qftq4\" (UID: \"8b733b4b-8597-484d-a6a6-b5635e80b483\") " pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:24:26 crc kubenswrapper[4811]: I0219 10:24:26.095255 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq8nr\" (UniqueName: \"kubernetes.io/projected/8b733b4b-8597-484d-a6a6-b5635e80b483-kube-api-access-hq8nr\") pod \"certified-operators-qftq4\" (UID: \"8b733b4b-8597-484d-a6a6-b5635e80b483\") " pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:24:26 crc kubenswrapper[4811]: I0219 10:24:26.095353 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b733b4b-8597-484d-a6a6-b5635e80b483-utilities\") pod \"certified-operators-qftq4\" (UID: \"8b733b4b-8597-484d-a6a6-b5635e80b483\") " pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:24:26 crc kubenswrapper[4811]: I0219 10:24:26.095742 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b733b4b-8597-484d-a6a6-b5635e80b483-catalog-content\") pod \"certified-operators-qftq4\" (UID: \"8b733b4b-8597-484d-a6a6-b5635e80b483\") " pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:24:26 crc kubenswrapper[4811]: I0219 10:24:26.095783 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b733b4b-8597-484d-a6a6-b5635e80b483-utilities\") pod \"certified-operators-qftq4\" (UID: \"8b733b4b-8597-484d-a6a6-b5635e80b483\") " pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:24:26 crc kubenswrapper[4811]: I0219 10:24:26.121092 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq8nr\" (UniqueName: \"kubernetes.io/projected/8b733b4b-8597-484d-a6a6-b5635e80b483-kube-api-access-hq8nr\") pod \"certified-operators-qftq4\" (UID: \"8b733b4b-8597-484d-a6a6-b5635e80b483\") " pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:24:26 crc kubenswrapper[4811]: I0219 10:24:26.222629 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:24:26 crc kubenswrapper[4811]: I0219 10:24:26.807028 4811 generic.go:334] "Generic (PLEG): container finished" podID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerID="3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c" exitCode=0 Feb 19 10:24:26 crc kubenswrapper[4811]: I0219 10:24:26.807071 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm6gw" event={"ID":"1e7e0a50-1704-4887-ab61-3e8e69345ab9","Type":"ContainerDied","Data":"3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c"} Feb 19 10:24:31 crc kubenswrapper[4811]: E0219 10:24:31.349629 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 19 10:24:31 crc kubenswrapper[4811]: E0219 10:24:31.350249 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nvgph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-c84zh_openstack-operators(f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:24:31 crc kubenswrapper[4811]: E0219 10:24:31.351582 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" podUID="f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3" Feb 19 10:24:31 crc kubenswrapper[4811]: E0219 10:24:31.840596 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" podUID="f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3" Feb 19 10:24:32 crc kubenswrapper[4811]: E0219 10:24:32.838163 4811 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" Feb 19 10:24:32 crc kubenswrapper[4811]: E0219 10:24:32.838421 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jnlwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-456kj_openstack-operators(eab9f83b-209d-4688-8c7f-07840bae01ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:24:32 crc kubenswrapper[4811]: E0219 10:24:32.840336 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" podUID="eab9f83b-209d-4688-8c7f-07840bae01ff" Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.158090 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c is running failed: container process not found" containerID="3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.158398 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c is running failed: container process not found" containerID="3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.158617 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c is running failed: container process not found" containerID="3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.158649 4811 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-tm6gw" podUID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerName="registry-server" Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.344714 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.345283 4811 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.345334 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert podName:6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:49.345319936 +0000 UTC m=+977.871784622 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert") pod "infra-operator-controller-manager-79d975b745-bqpcc" (UID: "6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58") : secret "infra-operator-webhook-server-cert" not found Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.422628 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.422824 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nd55r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-9fqp5_openstack-operators(5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.424136 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" podUID="5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7" Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.470616 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.549917 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.550080 4811 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.550192 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert podName:484c4e4b-0de6-427c-adde-b597fed189a7 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:49.550176491 +0000 UTC m=+978.076641177 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" (UID: "484c4e4b-0de6-427c-adde-b597fed189a7") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.651398 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9rg4\" (UniqueName: \"kubernetes.io/projected/1e7e0a50-1704-4887-ab61-3e8e69345ab9-kube-api-access-n9rg4\") pod \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\" (UID: \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.651516 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-catalog-content\") pod \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\" (UID: \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.651576 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-utilities\") pod \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\" (UID: \"1e7e0a50-1704-4887-ab61-3e8e69345ab9\") " Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.652668 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-utilities" (OuterVolumeSpecName: "utilities") pod "1e7e0a50-1704-4887-ab61-3e8e69345ab9" (UID: "1e7e0a50-1704-4887-ab61-3e8e69345ab9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.659368 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7e0a50-1704-4887-ab61-3e8e69345ab9-kube-api-access-n9rg4" (OuterVolumeSpecName: "kube-api-access-n9rg4") pod "1e7e0a50-1704-4887-ab61-3e8e69345ab9" (UID: "1e7e0a50-1704-4887-ab61-3e8e69345ab9"). InnerVolumeSpecName "kube-api-access-n9rg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.702293 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e7e0a50-1704-4887-ab61-3e8e69345ab9" (UID: "1e7e0a50-1704-4887-ab61-3e8e69345ab9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.753285 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9rg4\" (UniqueName: \"kubernetes.io/projected/1e7e0a50-1704-4887-ab61-3e8e69345ab9-kube-api-access-n9rg4\") on node \"crc\" DevicePath \"\"" Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.753318 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.753329 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e7e0a50-1704-4887-ab61-3e8e69345ab9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.860370 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tm6gw" 
event={"ID":"1e7e0a50-1704-4887-ab61-3e8e69345ab9","Type":"ContainerDied","Data":"b5d1afe9d99c7bc99bfa07b1b52b390e2cb51f1d1b5af72316949deff01bbe24"} Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.860435 4811 scope.go:117] "RemoveContainer" containerID="3dcf8fc9b4aa3d3f6e6e98e90f48d966abc8c1e32c8684b5f49f14a51810ca5c" Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.860657 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tm6gw" Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.863032 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" podUID="5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7" Feb 19 10:24:33 crc kubenswrapper[4811]: E0219 10:24:33.867607 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" podUID="eab9f83b-209d-4688-8c7f-07840bae01ff" Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.904721 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tm6gw"] Feb 19 10:24:33 crc kubenswrapper[4811]: I0219 10:24:33.911903 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tm6gw"] Feb 19 10:24:34 crc kubenswrapper[4811]: I0219 10:24:34.057281 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:34 crc kubenswrapper[4811]: I0219 10:24:34.057404 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:34 crc kubenswrapper[4811]: E0219 10:24:34.057499 4811 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 10:24:34 crc kubenswrapper[4811]: E0219 10:24:34.057573 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs podName:7f4a5782-be5f-4a0d-be5d-b8a5014c4711 nodeName:}" failed. No retries permitted until 2026-02-19 10:24:50.057554414 +0000 UTC m=+978.584019100 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs") pod "openstack-operator-controller-manager-7b558f465f-z4vhq" (UID: "7f4a5782-be5f-4a0d-be5d-b8a5014c4711") : secret "metrics-server-cert" not found Feb 19 10:24:34 crc kubenswrapper[4811]: I0219 10:24:34.061010 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-webhook-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:34 crc kubenswrapper[4811]: E0219 10:24:34.124382 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 19 10:24:34 crc kubenswrapper[4811]: E0219 10:24:34.124559 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d8xn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-fvdc7_openstack-operators(b5e1b149-0349-4c1c-be95-b9c540da49f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:24:34 crc kubenswrapper[4811]: E0219 10:24:34.125761 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" podUID="b5e1b149-0349-4c1c-be95-b9c540da49f1" Feb 19 10:24:34 crc kubenswrapper[4811]: I0219 10:24:34.593518 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" path="/var/lib/kubelet/pods/1e7e0a50-1704-4887-ab61-3e8e69345ab9/volumes" Feb 19 10:24:34 crc kubenswrapper[4811]: E0219 10:24:34.870913 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" podUID="b5e1b149-0349-4c1c-be95-b9c540da49f1" Feb 19 10:24:35 crc kubenswrapper[4811]: E0219 10:24:35.142149 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 19 10:24:35 crc kubenswrapper[4811]: E0219 10:24:35.142446 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-75nmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-2lvw2_openstack-operators(6c3aab50-1956-41f1-be69-aa976020f25e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:24:35 crc kubenswrapper[4811]: E0219 10:24:35.143740 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" podUID="6c3aab50-1956-41f1-be69-aa976020f25e" Feb 19 10:24:35 crc kubenswrapper[4811]: I0219 10:24:35.759336 4811 scope.go:117] "RemoveContainer" containerID="91f2dc39157089b42db0deaea80db28d4aa1bfcb9202457a96d92f84fdf052b1" Feb 19 10:24:35 crc kubenswrapper[4811]: E0219 10:24:35.881971 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" podUID="6c3aab50-1956-41f1-be69-aa976020f25e" Feb 19 10:24:36 crc kubenswrapper[4811]: I0219 10:24:36.184890 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qftq4"] Feb 19 10:24:36 crc kubenswrapper[4811]: W0219 10:24:36.414556 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b733b4b_8597_484d_a6a6_b5635e80b483.slice/crio-f670cfb3825ad981c2cceaac0699dcdf5ade55fc48af8d8aa4b4083704b4d4d9 WatchSource:0}: Error finding container f670cfb3825ad981c2cceaac0699dcdf5ade55fc48af8d8aa4b4083704b4d4d9: Status 404 returned error can't find the container with id f670cfb3825ad981c2cceaac0699dcdf5ade55fc48af8d8aa4b4083704b4d4d9 Feb 19 10:24:36 crc kubenswrapper[4811]: I0219 10:24:36.891363 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qftq4" event={"ID":"8b733b4b-8597-484d-a6a6-b5635e80b483","Type":"ContainerStarted","Data":"f670cfb3825ad981c2cceaac0699dcdf5ade55fc48af8d8aa4b4083704b4d4d9"} Feb 19 10:24:38 crc kubenswrapper[4811]: I0219 10:24:38.124437 4811 scope.go:117] "RemoveContainer" 
containerID="b50ae376f87b7953fcd4dce4259c4355d49f3d2757b88fa202101aa04f32feca" Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.921287 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-8xp7b" event={"ID":"75dd64a6-abdc-4d68-b9ca-abe41484b894","Type":"ContainerStarted","Data":"be6081bf2a339a0ae9f09f2f5534a4cd1a12ce19ec94f4a839a5768fa3584758"} Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.922416 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-8xp7b" Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.931890 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t" event={"ID":"9def098d-8e58-4145-a851-134a7b2620b5","Type":"ContainerStarted","Data":"dff80a412dcfc05a50180829055b42e9f1320c1634dbed8719f0d628e3dd14d5"} Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.938065 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4" event={"ID":"03bff346-c835-4fba-9950-59e9e86d8a23","Type":"ContainerStarted","Data":"d9b832ed7a23a222328afb76d2891bf1de88ffcdc19cabdf6ac7fede55c96607"} Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.938942 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4" Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.944025 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-8xp7b" podStartSLOduration=6.604620698 podStartE2EDuration="22.944009015s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.408001209 +0000 UTC m=+947.934465895" lastFinishedPulling="2026-02-19 
10:24:35.747389526 +0000 UTC m=+964.273854212" observedRunningTime="2026-02-19 10:24:39.939294453 +0000 UTC m=+968.465759139" watchObservedRunningTime="2026-02-19 10:24:39.944009015 +0000 UTC m=+968.470473701" Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.952714 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz" Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.964107 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw" event={"ID":"058e8da6-f607-4153-bb9e-af486f6c87e7","Type":"ContainerStarted","Data":"335e50e88347acfa09f467155ca8fcb48885aadcb5ff89795c6b028eaebcc138"} Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.964832 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw" Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.967262 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4" podStartSLOduration=7.157700447 podStartE2EDuration="22.967247568s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.298036636 +0000 UTC m=+947.824501322" lastFinishedPulling="2026-02-19 10:24:35.107583757 +0000 UTC m=+963.634048443" observedRunningTime="2026-02-19 10:24:39.965090842 +0000 UTC m=+968.491555528" watchObservedRunningTime="2026-02-19 10:24:39.967247568 +0000 UTC m=+968.493712254" Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.971757 4811 generic.go:334] "Generic (PLEG): container finished" podID="8b733b4b-8597-484d-a6a6-b5635e80b483" containerID="9e5c7aac2b492e67cfcfb8850359826bebcf0d005ab9114e3b0e353e1f1099f6" exitCode=0 Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.972625 4811 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-qftq4" event={"ID":"8b733b4b-8597-484d-a6a6-b5635e80b483","Type":"ContainerDied","Data":"9e5c7aac2b492e67cfcfb8850359826bebcf0d005ab9114e3b0e353e1f1099f6"} Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.981967 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp" event={"ID":"fe03e75a-b898-4de2-8569-bce0add53798","Type":"ContainerStarted","Data":"0dc83943de5f422465a3afda60e3a46ba6ac4ba7265f4f0a6d0ec921c2f1532d"} Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.982476 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp" Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.995225 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x" event={"ID":"ed315fb7-c37f-4d65-a048-78c1a354ceb3","Type":"ContainerStarted","Data":"856069ed685e416c4f6eb7fb818156fb8b5440e15b22d93967ed0050825a5b07"} Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.995839 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x" Feb 19 10:24:39 crc kubenswrapper[4811]: I0219 10:24:39.997691 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5vw7t" podStartSLOduration=2.988458149 podStartE2EDuration="22.997679418s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.421830428 +0000 UTC m=+947.948295114" lastFinishedPulling="2026-02-19 10:24:39.431051697 +0000 UTC m=+967.957516383" observedRunningTime="2026-02-19 10:24:39.991935829 +0000 UTC m=+968.518400515" watchObservedRunningTime="2026-02-19 10:24:39.997679418 +0000 UTC m=+968.524144104" Feb 19 10:24:40 crc 
kubenswrapper[4811]: I0219 10:24:40.004536 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" event={"ID":"8841254d-fb8a-4391-984a-02a39949ca31","Type":"ContainerStarted","Data":"2e763548c89dee210b13e0f86b4f8b6f9d5a4abba2e36b40ea6d1d3c404b0994"} Feb 19 10:24:40 crc kubenswrapper[4811]: I0219 10:24:40.005282 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" Feb 19 10:24:40 crc kubenswrapper[4811]: I0219 10:24:40.008072 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl" event={"ID":"57410614-54e3-49cb-8afe-ee60a861f161","Type":"ContainerStarted","Data":"37630446f7b87752b19c40208585851b9b0ddcf890788892d6d8fe5888d4c3a8"} Feb 19 10:24:40 crc kubenswrapper[4811]: I0219 10:24:40.008880 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl" Feb 19 10:24:40 crc kubenswrapper[4811]: I0219 10:24:40.080045 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp" podStartSLOduration=5.980716871 podStartE2EDuration="23.080027424s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.290735716 +0000 UTC m=+947.817200402" lastFinishedPulling="2026-02-19 10:24:36.390046269 +0000 UTC m=+964.916510955" observedRunningTime="2026-02-19 10:24:40.075668591 +0000 UTC m=+968.602133277" watchObservedRunningTime="2026-02-19 10:24:40.080027424 +0000 UTC m=+968.606492110" Feb 19 10:24:40 crc kubenswrapper[4811]: I0219 10:24:40.096522 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz" 
podStartSLOduration=6.386052126 podStartE2EDuration="23.096507482s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:18.396070284 +0000 UTC m=+946.922534970" lastFinishedPulling="2026-02-19 10:24:35.10652564 +0000 UTC m=+963.632990326" observedRunningTime="2026-02-19 10:24:40.094732956 +0000 UTC m=+968.621197642" watchObservedRunningTime="2026-02-19 10:24:40.096507482 +0000 UTC m=+968.622972168" Feb 19 10:24:40 crc kubenswrapper[4811]: I0219 10:24:40.128706 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw" podStartSLOduration=7.336551167 podStartE2EDuration="23.128689057s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.315484918 +0000 UTC m=+947.841949604" lastFinishedPulling="2026-02-19 10:24:35.107622798 +0000 UTC m=+963.634087494" observedRunningTime="2026-02-19 10:24:40.123470141 +0000 UTC m=+968.649934827" watchObservedRunningTime="2026-02-19 10:24:40.128689057 +0000 UTC m=+968.655153743" Feb 19 10:24:40 crc kubenswrapper[4811]: I0219 10:24:40.148856 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x" podStartSLOduration=7.354573815 podStartE2EDuration="23.14883916s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.314156874 +0000 UTC m=+947.840621560" lastFinishedPulling="2026-02-19 10:24:35.108422219 +0000 UTC m=+963.634886905" observedRunningTime="2026-02-19 10:24:40.144773424 +0000 UTC m=+968.671238110" watchObservedRunningTime="2026-02-19 10:24:40.14883916 +0000 UTC m=+968.675303846" Feb 19 10:24:40 crc kubenswrapper[4811]: I0219 10:24:40.195103 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl" podStartSLOduration=5.894321539 
podStartE2EDuration="23.195082449s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:18.441898943 +0000 UTC m=+946.968363629" lastFinishedPulling="2026-02-19 10:24:35.742659853 +0000 UTC m=+964.269124539" observedRunningTime="2026-02-19 10:24:40.185557562 +0000 UTC m=+968.712022248" watchObservedRunningTime="2026-02-19 10:24:40.195082449 +0000 UTC m=+968.721547135" Feb 19 10:24:40 crc kubenswrapper[4811]: I0219 10:24:40.226628 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" podStartSLOduration=3.298085872 podStartE2EDuration="23.226606547s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.36101719 +0000 UTC m=+947.887481876" lastFinishedPulling="2026-02-19 10:24:39.289537865 +0000 UTC m=+967.816002551" observedRunningTime="2026-02-19 10:24:40.222900511 +0000 UTC m=+968.749365197" watchObservedRunningTime="2026-02-19 10:24:40.226606547 +0000 UTC m=+968.753071233" Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.015971 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz" event={"ID":"f39b017a-34d2-491b-a44a-c3fc5ea573b7","Type":"ContainerStarted","Data":"743a34da8477b85ac1cf191534f0a8b4eca567424edd09e5b39bacd5551acff6"} Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.018660 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" event={"ID":"b4dc2b86-5147-4420-a9ef-a2ef94991ed3","Type":"ContainerStarted","Data":"aa979a77016b65b2560883d3f9a121e98e44f96960a075d192b69e78fcae1ecf"} Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.018897 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" Feb 19 10:24:41 crc 
kubenswrapper[4811]: I0219 10:24:41.020320 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq" event={"ID":"46b8deed-b9de-4b32-b8e4-c3edc4ad97f9","Type":"ContainerStarted","Data":"6fd03bf2b6304190e567f6370e0ff889e96200a44e55cb2ab21891c5dc604795"} Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.020483 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq" Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.026867 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn" event={"ID":"3c49a030-0a6a-434c-986e-6957d9d27043","Type":"ContainerStarted","Data":"847a887fd27810e1498cff066231b0d836af273919d1a7dd0fd5f97d6d490db8"} Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.027025 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn" Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.032011 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" event={"ID":"34488f24-fe02-449f-a343-288ee29f5ec4","Type":"ContainerStarted","Data":"b54257721f06f05b53697a2a9f89efe393601aa43138f1edb98f983fdd346aa2"} Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.032259 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.035943 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" 
event={"ID":"0dd03f3f-2d9f-433e-942f-ebed08429ccf","Type":"ContainerStarted","Data":"a26918b362b1b47390b07651b58bb350c959dca8998405d01d673d3e542bf118"} Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.036830 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.039144 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff" event={"ID":"f5e56e20-0181-4f50-a112-b90aeb0ea9f4","Type":"ContainerStarted","Data":"35b2c187f8c66d33384fd627faa9f9e70a7e8cdd54c85c7683ed8cce4337e0f4"} Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.039178 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff" Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.081366 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" podStartSLOduration=4.166616706 podStartE2EDuration="24.081351493s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.373227237 +0000 UTC m=+947.899691923" lastFinishedPulling="2026-02-19 10:24:39.287962024 +0000 UTC m=+967.814426710" observedRunningTime="2026-02-19 10:24:41.052968607 +0000 UTC m=+969.579433293" watchObservedRunningTime="2026-02-19 10:24:41.081351493 +0000 UTC m=+969.607816179" Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.082606 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn" podStartSLOduration=7.850731188 podStartE2EDuration="24.082598216s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:18.874878117 +0000 UTC m=+947.401342793" 
lastFinishedPulling="2026-02-19 10:24:35.106745135 +0000 UTC m=+963.633209821" observedRunningTime="2026-02-19 10:24:41.078744026 +0000 UTC m=+969.605208712" watchObservedRunningTime="2026-02-19 10:24:41.082598216 +0000 UTC m=+969.609062892" Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.119807 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" podStartSLOduration=4.949517688 podStartE2EDuration="24.119787841s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.383291748 +0000 UTC m=+947.909756434" lastFinishedPulling="2026-02-19 10:24:38.553561901 +0000 UTC m=+967.080026587" observedRunningTime="2026-02-19 10:24:41.096617229 +0000 UTC m=+969.623081915" watchObservedRunningTime="2026-02-19 10:24:41.119787841 +0000 UTC m=+969.646252517" Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.145617 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff" podStartSLOduration=6.841130994 podStartE2EDuration="24.14559652s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:18.438190477 +0000 UTC m=+946.964655163" lastFinishedPulling="2026-02-19 10:24:35.742656003 +0000 UTC m=+964.269120689" observedRunningTime="2026-02-19 10:24:41.124407101 +0000 UTC m=+969.650871787" watchObservedRunningTime="2026-02-19 10:24:41.14559652 +0000 UTC m=+969.672061206" Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.147204 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq" podStartSLOduration=7.874293569 podStartE2EDuration="24.147196152s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:18.833928195 +0000 UTC m=+947.360392891" 
lastFinishedPulling="2026-02-19 10:24:35.106830788 +0000 UTC m=+963.633295474" observedRunningTime="2026-02-19 10:24:41.142681475 +0000 UTC m=+969.669146171" watchObservedRunningTime="2026-02-19 10:24:41.147196152 +0000 UTC m=+969.673660838" Feb 19 10:24:41 crc kubenswrapper[4811]: I0219 10:24:41.172523 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" podStartSLOduration=4.109228546 podStartE2EDuration="24.172499078s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.369743536 +0000 UTC m=+947.896208232" lastFinishedPulling="2026-02-19 10:24:39.433014078 +0000 UTC m=+967.959478764" observedRunningTime="2026-02-19 10:24:41.16909617 +0000 UTC m=+969.695560866" watchObservedRunningTime="2026-02-19 10:24:41.172499078 +0000 UTC m=+969.698963764" Feb 19 10:24:42 crc kubenswrapper[4811]: I0219 10:24:42.982513 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fn2cn"] Feb 19 10:24:42 crc kubenswrapper[4811]: E0219 10:24:42.982838 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerName="extract-utilities" Feb 19 10:24:42 crc kubenswrapper[4811]: I0219 10:24:42.982849 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerName="extract-utilities" Feb 19 10:24:42 crc kubenswrapper[4811]: E0219 10:24:42.982857 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerName="registry-server" Feb 19 10:24:42 crc kubenswrapper[4811]: I0219 10:24:42.982863 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerName="registry-server" Feb 19 10:24:42 crc kubenswrapper[4811]: E0219 10:24:42.982875 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerName="extract-content" Feb 19 10:24:42 crc kubenswrapper[4811]: I0219 10:24:42.982884 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerName="extract-content" Feb 19 10:24:42 crc kubenswrapper[4811]: I0219 10:24:42.983052 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e7e0a50-1704-4887-ab61-3e8e69345ab9" containerName="registry-server" Feb 19 10:24:42 crc kubenswrapper[4811]: I0219 10:24:42.984067 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:24:42 crc kubenswrapper[4811]: I0219 10:24:42.989373 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn2cn"] Feb 19 10:24:43 crc kubenswrapper[4811]: I0219 10:24:43.100673 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-utilities\") pod \"redhat-marketplace-fn2cn\" (UID: \"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:24:43 crc kubenswrapper[4811]: I0219 10:24:43.101252 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-catalog-content\") pod \"redhat-marketplace-fn2cn\" (UID: \"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:24:43 crc kubenswrapper[4811]: I0219 10:24:43.101511 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tpk\" (UniqueName: \"kubernetes.io/projected/d3fab660-6be9-439b-bfec-6b581e8b9c43-kube-api-access-l5tpk\") pod \"redhat-marketplace-fn2cn\" (UID: 
\"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:24:43 crc kubenswrapper[4811]: I0219 10:24:43.203950 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-catalog-content\") pod \"redhat-marketplace-fn2cn\" (UID: \"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:24:43 crc kubenswrapper[4811]: I0219 10:24:43.204060 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tpk\" (UniqueName: \"kubernetes.io/projected/d3fab660-6be9-439b-bfec-6b581e8b9c43-kube-api-access-l5tpk\") pod \"redhat-marketplace-fn2cn\" (UID: \"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:24:43 crc kubenswrapper[4811]: I0219 10:24:43.204121 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-utilities\") pod \"redhat-marketplace-fn2cn\" (UID: \"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:24:43 crc kubenswrapper[4811]: I0219 10:24:43.204584 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-utilities\") pod \"redhat-marketplace-fn2cn\" (UID: \"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:24:43 crc kubenswrapper[4811]: I0219 10:24:43.204789 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-catalog-content\") pod \"redhat-marketplace-fn2cn\" (UID: \"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " 
pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:24:43 crc kubenswrapper[4811]: I0219 10:24:43.228702 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tpk\" (UniqueName: \"kubernetes.io/projected/d3fab660-6be9-439b-bfec-6b581e8b9c43-kube-api-access-l5tpk\") pod \"redhat-marketplace-fn2cn\" (UID: \"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:24:43 crc kubenswrapper[4811]: I0219 10:24:43.310052 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:24:43 crc kubenswrapper[4811]: I0219 10:24:43.918325 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn2cn"] Feb 19 10:24:47 crc kubenswrapper[4811]: I0219 10:24:47.564561 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-s42cl" Feb 19 10:24:47 crc kubenswrapper[4811]: I0219 10:24:47.624906 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-dzgff" Feb 19 10:24:47 crc kubenswrapper[4811]: I0219 10:24:47.632626 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-wh6gz" Feb 19 10:24:47 crc kubenswrapper[4811]: I0219 10:24:47.801428 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-cjqjn" Feb 19 10:24:47 crc kubenswrapper[4811]: I0219 10:24:47.866212 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-d66mq" Feb 19 10:24:47 crc kubenswrapper[4811]: I0219 10:24:47.898529 4811 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-frxgp" Feb 19 10:24:47 crc kubenswrapper[4811]: I0219 10:24:47.935208 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-f64jn" Feb 19 10:24:48 crc kubenswrapper[4811]: I0219 10:24:48.006801 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-rdtd4" Feb 19 10:24:48 crc kubenswrapper[4811]: I0219 10:24:48.023852 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-jzwmv" Feb 19 10:24:48 crc kubenswrapper[4811]: I0219 10:24:48.156099 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h5g6x" Feb 19 10:24:48 crc kubenswrapper[4811]: I0219 10:24:48.168086 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-4tg5n" Feb 19 10:24:48 crc kubenswrapper[4811]: I0219 10:24:48.258670 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6f8zw" Feb 19 10:24:48 crc kubenswrapper[4811]: I0219 10:24:48.344934 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-8xp7b" Feb 19 10:24:48 crc kubenswrapper[4811]: I0219 10:24:48.376779 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-czvjr" Feb 19 10:24:49 crc kubenswrapper[4811]: I0219 10:24:49.351217 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:49 crc kubenswrapper[4811]: I0219 10:24:49.359096 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58-cert\") pod \"infra-operator-controller-manager-79d975b745-bqpcc\" (UID: \"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:49 crc kubenswrapper[4811]: I0219 10:24:49.554269 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:49 crc kubenswrapper[4811]: I0219 10:24:49.562156 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/484c4e4b-0de6-427c-adde-b597fed189a7-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv\" (UID: \"484c4e4b-0de6-427c-adde-b597fed189a7\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:49 crc kubenswrapper[4811]: I0219 10:24:49.596864 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vdv5b" Feb 19 10:24:49 crc kubenswrapper[4811]: I0219 10:24:49.606082 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:24:49 crc kubenswrapper[4811]: I0219 10:24:49.637992 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n99n7" Feb 19 10:24:49 crc kubenswrapper[4811]: I0219 10:24:49.646158 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:24:50 crc kubenswrapper[4811]: I0219 10:24:50.060918 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:50 crc kubenswrapper[4811]: I0219 10:24:50.066505 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7f4a5782-be5f-4a0d-be5d-b8a5014c4711-metrics-certs\") pod \"openstack-operator-controller-manager-7b558f465f-z4vhq\" (UID: \"7f4a5782-be5f-4a0d-be5d-b8a5014c4711\") " pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:50 crc kubenswrapper[4811]: I0219 10:24:50.309892 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5vrfs" Feb 19 10:24:50 crc kubenswrapper[4811]: I0219 10:24:50.317840 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:53 crc kubenswrapper[4811]: I0219 10:24:53.558253 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv"] Feb 19 10:24:53 crc kubenswrapper[4811]: W0219 10:24:53.573269 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod484c4e4b_0de6_427c_adde_b597fed189a7.slice/crio-150f4102e4f20c81453d29207a228f0aa58947b2149a9c6152f7fcdd660b007b WatchSource:0}: Error finding container 150f4102e4f20c81453d29207a228f0aa58947b2149a9c6152f7fcdd660b007b: Status 404 returned error can't find the container with id 150f4102e4f20c81453d29207a228f0aa58947b2149a9c6152f7fcdd660b007b Feb 19 10:24:53 crc kubenswrapper[4811]: I0219 10:24:53.804847 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc"] Feb 19 10:24:53 crc kubenswrapper[4811]: W0219 10:24:53.808686 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b1bf169_cf7e_488d_a6c3_ee3dc0b92c58.slice/crio-5ca05109d14670405f78a01ecc31c52d09cc2421dd3e2827d42e40024fb0d0cb WatchSource:0}: Error finding container 5ca05109d14670405f78a01ecc31c52d09cc2421dd3e2827d42e40024fb0d0cb: Status 404 returned error can't find the container with id 5ca05109d14670405f78a01ecc31c52d09cc2421dd3e2827d42e40024fb0d0cb Feb 19 10:24:53 crc kubenswrapper[4811]: I0219 10:24:53.868541 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq"] Feb 19 10:24:53 crc kubenswrapper[4811]: W0219 10:24:53.873328 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f4a5782_be5f_4a0d_be5d_b8a5014c4711.slice/crio-4541508aad5c1f23ebe6accc5396ec067e9c4bbc9e541ac5e377050ad8f9792d WatchSource:0}: Error finding container 4541508aad5c1f23ebe6accc5396ec067e9c4bbc9e541ac5e377050ad8f9792d: Status 404 returned error can't find the container with id 4541508aad5c1f23ebe6accc5396ec067e9c4bbc9e541ac5e377050ad8f9792d Feb 19 10:24:53 crc kubenswrapper[4811]: E0219 10:24:53.887774 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 10:24:53 crc kubenswrapper[4811]: E0219 10:24:53.888040 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hq8nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qftq4_openshift-marketplace(8b733b4b-8597-484d-a6a6-b5635e80b483): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 10:24:53 crc kubenswrapper[4811]: E0219 10:24:53.889612 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qftq4" podUID="8b733b4b-8597-484d-a6a6-b5635e80b483" Feb 19 10:24:54 crc 
kubenswrapper[4811]: I0219 10:24:54.158433 4811 generic.go:334] "Generic (PLEG): container finished" podID="d3fab660-6be9-439b-bfec-6b581e8b9c43" containerID="1d8cbafcf89be4cde1d891e98dd886b7b4aef03dec459736763d1a5e33458410" exitCode=0 Feb 19 10:24:54 crc kubenswrapper[4811]: I0219 10:24:54.158518 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn2cn" event={"ID":"d3fab660-6be9-439b-bfec-6b581e8b9c43","Type":"ContainerDied","Data":"1d8cbafcf89be4cde1d891e98dd886b7b4aef03dec459736763d1a5e33458410"} Feb 19 10:24:54 crc kubenswrapper[4811]: I0219 10:24:54.159109 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn2cn" event={"ID":"d3fab660-6be9-439b-bfec-6b581e8b9c43","Type":"ContainerStarted","Data":"40886a90c50bc1eb953ba417ebf39646dfeaab107006018a02d806e69845b83d"} Feb 19 10:24:54 crc kubenswrapper[4811]: I0219 10:24:54.160412 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" event={"ID":"7f4a5782-be5f-4a0d-be5d-b8a5014c4711","Type":"ContainerStarted","Data":"4541508aad5c1f23ebe6accc5396ec067e9c4bbc9e541ac5e377050ad8f9792d"} Feb 19 10:24:54 crc kubenswrapper[4811]: I0219 10:24:54.161847 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" event={"ID":"484c4e4b-0de6-427c-adde-b597fed189a7","Type":"ContainerStarted","Data":"150f4102e4f20c81453d29207a228f0aa58947b2149a9c6152f7fcdd660b007b"} Feb 19 10:24:54 crc kubenswrapper[4811]: I0219 10:24:54.163688 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" event={"ID":"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58","Type":"ContainerStarted","Data":"5ca05109d14670405f78a01ecc31c52d09cc2421dd3e2827d42e40024fb0d0cb"} Feb 19 10:24:54 crc kubenswrapper[4811]: E0219 10:24:54.164423 
4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qftq4" podUID="8b733b4b-8597-484d-a6a6-b5635e80b483" Feb 19 10:24:55 crc kubenswrapper[4811]: I0219 10:24:55.173315 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" event={"ID":"7f4a5782-be5f-4a0d-be5d-b8a5014c4711","Type":"ContainerStarted","Data":"c765061b667dc43fc992844a641f12eac7cd0977649e098bdb5c624024937927"} Feb 19 10:24:56 crc kubenswrapper[4811]: I0219 10:24:56.181712 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:24:56 crc kubenswrapper[4811]: I0219 10:24:56.221130 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" podStartSLOduration=39.221077348 podStartE2EDuration="39.221077348s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:24:56.216721617 +0000 UTC m=+984.743186343" watchObservedRunningTime="2026-02-19 10:24:56.221077348 +0000 UTC m=+984.747542074" Feb 19 10:24:56 crc kubenswrapper[4811]: I0219 10:24:56.788126 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:24:56 crc kubenswrapper[4811]: I0219 10:24:56.788250 4811 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.214522 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" event={"ID":"b5e1b149-0349-4c1c-be95-b9c540da49f1","Type":"ContainerStarted","Data":"b9535dd5caa042a3c263b66e8510a996c512e839cccc76392bda94fde8918c2a"} Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.215842 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.218234 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" event={"ID":"eab9f83b-209d-4688-8c7f-07840bae01ff","Type":"ContainerStarted","Data":"cfb91b2cc455053bfae9674b2a972eb691e536b42be1907e726f71d93ebb6544"} Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.219185 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.222657 4811 generic.go:334] "Generic (PLEG): container finished" podID="d3fab660-6be9-439b-bfec-6b581e8b9c43" containerID="580fa930aac3b0855b1773ee871ef1cf25b73ac212d1b47a78f231517ba956b9" exitCode=0 Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.222719 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn2cn" event={"ID":"d3fab660-6be9-439b-bfec-6b581e8b9c43","Type":"ContainerDied","Data":"580fa930aac3b0855b1773ee871ef1cf25b73ac212d1b47a78f231517ba956b9"} Feb 19 10:24:59 crc 
kubenswrapper[4811]: I0219 10:24:59.226617 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" event={"ID":"5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7","Type":"ContainerStarted","Data":"5c0c2db14e82616a76d3ca7bbce6a0f22b485a25a992f7427d9818cb2cad3910"} Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.226842 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.234924 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" podStartSLOduration=3.163987928 podStartE2EDuration="42.234903476s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:18.8333502 +0000 UTC m=+947.359814876" lastFinishedPulling="2026-02-19 10:24:57.904265738 +0000 UTC m=+986.430730424" observedRunningTime="2026-02-19 10:24:59.230838302 +0000 UTC m=+987.757303008" watchObservedRunningTime="2026-02-19 10:24:59.234903476 +0000 UTC m=+987.761368162" Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.242931 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" event={"ID":"f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3","Type":"ContainerStarted","Data":"d215d0b8641152665a336e9aff54bee3e363ca9a4e0e5974748c99a1307c4957"} Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.243641 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.245697 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" 
event={"ID":"6c3aab50-1956-41f1-be69-aa976020f25e","Type":"ContainerStarted","Data":"09787294a2ce6fb83ac7799f97fb4a23ebb1e38fbb148d38eaa7b0cec540d228"} Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.246463 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.257119 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" podStartSLOduration=3.644588116 podStartE2EDuration="42.257092012s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.359686725 +0000 UTC m=+947.886151411" lastFinishedPulling="2026-02-19 10:24:57.972190631 +0000 UTC m=+986.498655307" observedRunningTime="2026-02-19 10:24:59.249234721 +0000 UTC m=+987.775699417" watchObservedRunningTime="2026-02-19 10:24:59.257092012 +0000 UTC m=+987.783556698" Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.288836 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" podStartSLOduration=2.982003882 podStartE2EDuration="42.288816131s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:18.597068709 +0000 UTC m=+947.123533385" lastFinishedPulling="2026-02-19 10:24:57.903880948 +0000 UTC m=+986.430345634" observedRunningTime="2026-02-19 10:24:59.286510032 +0000 UTC m=+987.812974728" watchObservedRunningTime="2026-02-19 10:24:59.288816131 +0000 UTC m=+987.815280817" Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.308335 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" podStartSLOduration=2.84467787 podStartE2EDuration="42.308316638s" podCreationTimestamp="2026-02-19 10:24:17 
+0000 UTC" firstStartedPulling="2026-02-19 10:24:18.438826554 +0000 UTC m=+946.965291240" lastFinishedPulling="2026-02-19 10:24:57.902465322 +0000 UTC m=+986.428930008" observedRunningTime="2026-02-19 10:24:59.302392857 +0000 UTC m=+987.828857543" watchObservedRunningTime="2026-02-19 10:24:59.308316638 +0000 UTC m=+987.834781324" Feb 19 10:24:59 crc kubenswrapper[4811]: I0219 10:24:59.324109 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" podStartSLOduration=3.782386491 podStartE2EDuration="42.324085461s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:19.360736242 +0000 UTC m=+947.887200928" lastFinishedPulling="2026-02-19 10:24:57.902435212 +0000 UTC m=+986.428899898" observedRunningTime="2026-02-19 10:24:59.317204545 +0000 UTC m=+987.843669231" watchObservedRunningTime="2026-02-19 10:24:59.324085461 +0000 UTC m=+987.850550157" Feb 19 10:25:00 crc kubenswrapper[4811]: I0219 10:25:00.325876 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7b558f465f-z4vhq" Feb 19 10:25:02 crc kubenswrapper[4811]: I0219 10:25:02.273025 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" event={"ID":"484c4e4b-0de6-427c-adde-b597fed189a7","Type":"ContainerStarted","Data":"51b66b383ebbbf71e4cec9be9fb0d2d5ecf4cc771d413853397e935b03af9d6b"} Feb 19 10:25:02 crc kubenswrapper[4811]: I0219 10:25:02.273505 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:25:02 crc kubenswrapper[4811]: I0219 10:25:02.274966 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" 
event={"ID":"6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58","Type":"ContainerStarted","Data":"d43e3c7fc1761bc6a2856ff2503654ec477b1ba6a0f81c5015c8887e8cdd8bfd"} Feb 19 10:25:02 crc kubenswrapper[4811]: I0219 10:25:02.275101 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:25:02 crc kubenswrapper[4811]: I0219 10:25:02.277948 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn2cn" event={"ID":"d3fab660-6be9-439b-bfec-6b581e8b9c43","Type":"ContainerStarted","Data":"9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106"} Feb 19 10:25:02 crc kubenswrapper[4811]: I0219 10:25:02.309682 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" podStartSLOduration=37.775425977 podStartE2EDuration="45.309646148s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:53.579181758 +0000 UTC m=+982.105646444" lastFinishedPulling="2026-02-19 10:25:01.113401929 +0000 UTC m=+989.639866615" observedRunningTime="2026-02-19 10:25:02.299502069 +0000 UTC m=+990.825966765" watchObservedRunningTime="2026-02-19 10:25:02.309646148 +0000 UTC m=+990.836110844" Feb 19 10:25:02 crc kubenswrapper[4811]: I0219 10:25:02.324284 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" podStartSLOduration=38.024337566 podStartE2EDuration="45.32424838s" podCreationTimestamp="2026-02-19 10:24:17 +0000 UTC" firstStartedPulling="2026-02-19 10:24:53.812241033 +0000 UTC m=+982.338705719" lastFinishedPulling="2026-02-19 10:25:01.112151847 +0000 UTC m=+989.638616533" observedRunningTime="2026-02-19 10:25:02.319961481 +0000 UTC m=+990.846426177" watchObservedRunningTime="2026-02-19 10:25:02.32424838 +0000 UTC 
m=+990.850713066" Feb 19 10:25:02 crc kubenswrapper[4811]: I0219 10:25:02.354851 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fn2cn" podStartSLOduration=14.417829107 podStartE2EDuration="20.35482499s" podCreationTimestamp="2026-02-19 10:24:42 +0000 UTC" firstStartedPulling="2026-02-19 10:24:55.175134634 +0000 UTC m=+983.701599320" lastFinishedPulling="2026-02-19 10:25:01.112130517 +0000 UTC m=+989.638595203" observedRunningTime="2026-02-19 10:25:02.35284526 +0000 UTC m=+990.879309966" watchObservedRunningTime="2026-02-19 10:25:02.35482499 +0000 UTC m=+990.881289686" Feb 19 10:25:03 crc kubenswrapper[4811]: I0219 10:25:03.311119 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:25:03 crc kubenswrapper[4811]: I0219 10:25:03.312077 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:25:03 crc kubenswrapper[4811]: I0219 10:25:03.363429 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:25:05 crc kubenswrapper[4811]: I0219 10:25:05.586687 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:25:07 crc kubenswrapper[4811]: I0219 10:25:07.323278 4811 generic.go:334] "Generic (PLEG): container finished" podID="8b733b4b-8597-484d-a6a6-b5635e80b483" containerID="8f0f2062e385e3845305566f1e1dfcc7d9eff715210c59b56cbe75eff665590b" exitCode=0 Feb 19 10:25:07 crc kubenswrapper[4811]: I0219 10:25:07.323354 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qftq4" event={"ID":"8b733b4b-8597-484d-a6a6-b5635e80b483","Type":"ContainerDied","Data":"8f0f2062e385e3845305566f1e1dfcc7d9eff715210c59b56cbe75eff665590b"} Feb 19 10:25:07 crc kubenswrapper[4811]: 
I0219 10:25:07.595969 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-c84zh" Feb 19 10:25:07 crc kubenswrapper[4811]: I0219 10:25:07.705042 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-9fqp5" Feb 19 10:25:07 crc kubenswrapper[4811]: I0219 10:25:07.972727 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-fvdc7" Feb 19 10:25:08 crc kubenswrapper[4811]: I0219 10:25:08.009051 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-2lvw2" Feb 19 10:25:08 crc kubenswrapper[4811]: I0219 10:25:08.281351 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-456kj" Feb 19 10:25:08 crc kubenswrapper[4811]: I0219 10:25:08.335190 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qftq4" event={"ID":"8b733b4b-8597-484d-a6a6-b5635e80b483","Type":"ContainerStarted","Data":"e2582493fc43bcddf97035c657c2112bfaf87d2b56d28bb6fc7f33b12691b7ea"} Feb 19 10:25:08 crc kubenswrapper[4811]: I0219 10:25:08.355952 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qftq4" podStartSLOduration=15.585533335000001 podStartE2EDuration="43.355930179s" podCreationTimestamp="2026-02-19 10:24:25 +0000 UTC" firstStartedPulling="2026-02-19 10:24:39.974972849 +0000 UTC m=+968.501437535" lastFinishedPulling="2026-02-19 10:25:07.745369693 +0000 UTC m=+996.271834379" observedRunningTime="2026-02-19 10:25:08.350692055 +0000 UTC m=+996.877156751" watchObservedRunningTime="2026-02-19 10:25:08.355930179 +0000 UTC m=+996.882394865" 
Feb 19 10:25:09 crc kubenswrapper[4811]: I0219 10:25:09.624810 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv" Feb 19 10:25:09 crc kubenswrapper[4811]: I0219 10:25:09.655777 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bqpcc" Feb 19 10:25:13 crc kubenswrapper[4811]: I0219 10:25:13.357903 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:25:13 crc kubenswrapper[4811]: I0219 10:25:13.414359 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn2cn"] Feb 19 10:25:13 crc kubenswrapper[4811]: I0219 10:25:13.414607 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fn2cn" podUID="d3fab660-6be9-439b-bfec-6b581e8b9c43" containerName="registry-server" containerID="cri-o://9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106" gracePeriod=2 Feb 19 10:25:13 crc kubenswrapper[4811]: I0219 10:25:13.840281 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.002935 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-catalog-content\") pod \"d3fab660-6be9-439b-bfec-6b581e8b9c43\" (UID: \"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.002998 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-utilities\") pod \"d3fab660-6be9-439b-bfec-6b581e8b9c43\" (UID: \"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.003050 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5tpk\" (UniqueName: \"kubernetes.io/projected/d3fab660-6be9-439b-bfec-6b581e8b9c43-kube-api-access-l5tpk\") pod \"d3fab660-6be9-439b-bfec-6b581e8b9c43\" (UID: \"d3fab660-6be9-439b-bfec-6b581e8b9c43\") " Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.003878 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-utilities" (OuterVolumeSpecName: "utilities") pod "d3fab660-6be9-439b-bfec-6b581e8b9c43" (UID: "d3fab660-6be9-439b-bfec-6b581e8b9c43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.010641 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3fab660-6be9-439b-bfec-6b581e8b9c43-kube-api-access-l5tpk" (OuterVolumeSpecName: "kube-api-access-l5tpk") pod "d3fab660-6be9-439b-bfec-6b581e8b9c43" (UID: "d3fab660-6be9-439b-bfec-6b581e8b9c43"). InnerVolumeSpecName "kube-api-access-l5tpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.028676 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3fab660-6be9-439b-bfec-6b581e8b9c43" (UID: "d3fab660-6be9-439b-bfec-6b581e8b9c43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.105343 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.105673 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3fab660-6be9-439b-bfec-6b581e8b9c43-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.105802 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5tpk\" (UniqueName: \"kubernetes.io/projected/d3fab660-6be9-439b-bfec-6b581e8b9c43-kube-api-access-l5tpk\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.381483 4811 generic.go:334] "Generic (PLEG): container finished" podID="d3fab660-6be9-439b-bfec-6b581e8b9c43" containerID="9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106" exitCode=0 Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.381690 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fn2cn" event={"ID":"d3fab660-6be9-439b-bfec-6b581e8b9c43","Type":"ContainerDied","Data":"9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106"} Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.383342 4811 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-fn2cn" event={"ID":"d3fab660-6be9-439b-bfec-6b581e8b9c43","Type":"ContainerDied","Data":"40886a90c50bc1eb953ba417ebf39646dfeaab107006018a02d806e69845b83d"} Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.383503 4811 scope.go:117] "RemoveContainer" containerID="9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.381799 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fn2cn" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.416403 4811 scope.go:117] "RemoveContainer" containerID="580fa930aac3b0855b1773ee871ef1cf25b73ac212d1b47a78f231517ba956b9" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.433736 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn2cn"] Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.449721 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fn2cn"] Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.455828 4811 scope.go:117] "RemoveContainer" containerID="1d8cbafcf89be4cde1d891e98dd886b7b4aef03dec459736763d1a5e33458410" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.475692 4811 scope.go:117] "RemoveContainer" containerID="9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106" Feb 19 10:25:14 crc kubenswrapper[4811]: E0219 10:25:14.476290 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106\": container with ID starting with 9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106 not found: ID does not exist" containerID="9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.476331 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106"} err="failed to get container status \"9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106\": rpc error: code = NotFound desc = could not find container \"9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106\": container with ID starting with 9b7ef39261b8b2b904ce22e15fb73609c42095b027961223bd8e98a273456106 not found: ID does not exist" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.476365 4811 scope.go:117] "RemoveContainer" containerID="580fa930aac3b0855b1773ee871ef1cf25b73ac212d1b47a78f231517ba956b9" Feb 19 10:25:14 crc kubenswrapper[4811]: E0219 10:25:14.476885 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"580fa930aac3b0855b1773ee871ef1cf25b73ac212d1b47a78f231517ba956b9\": container with ID starting with 580fa930aac3b0855b1773ee871ef1cf25b73ac212d1b47a78f231517ba956b9 not found: ID does not exist" containerID="580fa930aac3b0855b1773ee871ef1cf25b73ac212d1b47a78f231517ba956b9" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.476917 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"580fa930aac3b0855b1773ee871ef1cf25b73ac212d1b47a78f231517ba956b9"} err="failed to get container status \"580fa930aac3b0855b1773ee871ef1cf25b73ac212d1b47a78f231517ba956b9\": rpc error: code = NotFound desc = could not find container \"580fa930aac3b0855b1773ee871ef1cf25b73ac212d1b47a78f231517ba956b9\": container with ID starting with 580fa930aac3b0855b1773ee871ef1cf25b73ac212d1b47a78f231517ba956b9 not found: ID does not exist" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.476936 4811 scope.go:117] "RemoveContainer" containerID="1d8cbafcf89be4cde1d891e98dd886b7b4aef03dec459736763d1a5e33458410" Feb 19 10:25:14 crc kubenswrapper[4811]: E0219 
10:25:14.477178 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d8cbafcf89be4cde1d891e98dd886b7b4aef03dec459736763d1a5e33458410\": container with ID starting with 1d8cbafcf89be4cde1d891e98dd886b7b4aef03dec459736763d1a5e33458410 not found: ID does not exist" containerID="1d8cbafcf89be4cde1d891e98dd886b7b4aef03dec459736763d1a5e33458410" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.477201 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d8cbafcf89be4cde1d891e98dd886b7b4aef03dec459736763d1a5e33458410"} err="failed to get container status \"1d8cbafcf89be4cde1d891e98dd886b7b4aef03dec459736763d1a5e33458410\": rpc error: code = NotFound desc = could not find container \"1d8cbafcf89be4cde1d891e98dd886b7b4aef03dec459736763d1a5e33458410\": container with ID starting with 1d8cbafcf89be4cde1d891e98dd886b7b4aef03dec459736763d1a5e33458410 not found: ID does not exist" Feb 19 10:25:14 crc kubenswrapper[4811]: I0219 10:25:14.593880 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3fab660-6be9-439b-bfec-6b581e8b9c43" path="/var/lib/kubelet/pods/d3fab660-6be9-439b-bfec-6b581e8b9c43/volumes" Feb 19 10:25:16 crc kubenswrapper[4811]: I0219 10:25:16.222977 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:25:16 crc kubenswrapper[4811]: I0219 10:25:16.223075 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:25:16 crc kubenswrapper[4811]: I0219 10:25:16.283769 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:25:16 crc kubenswrapper[4811]: I0219 10:25:16.458069 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-qftq4" Feb 19 10:25:16 crc kubenswrapper[4811]: I0219 10:25:16.830249 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qftq4"] Feb 19 10:25:17 crc kubenswrapper[4811]: I0219 10:25:16.999335 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h2c5f"] Feb 19 10:25:17 crc kubenswrapper[4811]: I0219 10:25:17.000811 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h2c5f" podUID="5045e58a-5dec-4419-823e-479ca95e924f" containerName="registry-server" containerID="cri-o://640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703" gracePeriod=2 Feb 19 10:25:18 crc kubenswrapper[4811]: E0219 10:25:18.279536 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703 is running failed: container process not found" containerID="640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 10:25:18 crc kubenswrapper[4811]: E0219 10:25:18.280550 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703 is running failed: container process not found" containerID="640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 10:25:18 crc kubenswrapper[4811]: E0219 10:25:18.280901 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703 is running failed: container process not 
found" containerID="640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703" cmd=["grpc_health_probe","-addr=:50051"] Feb 19 10:25:18 crc kubenswrapper[4811]: E0219 10:25:18.280943 4811 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-h2c5f" podUID="5045e58a-5dec-4419-823e-479ca95e924f" containerName="registry-server" Feb 19 10:25:18 crc kubenswrapper[4811]: I0219 10:25:18.420141 4811 generic.go:334] "Generic (PLEG): container finished" podID="5045e58a-5dec-4419-823e-479ca95e924f" containerID="640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703" exitCode=0 Feb 19 10:25:18 crc kubenswrapper[4811]: I0219 10:25:18.420223 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2c5f" event={"ID":"5045e58a-5dec-4419-823e-479ca95e924f","Type":"ContainerDied","Data":"640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703"} Feb 19 10:25:18 crc kubenswrapper[4811]: I0219 10:25:18.882099 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h2c5f" Feb 19 10:25:18 crc kubenswrapper[4811]: I0219 10:25:18.910368 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nzdz\" (UniqueName: \"kubernetes.io/projected/5045e58a-5dec-4419-823e-479ca95e924f-kube-api-access-8nzdz\") pod \"5045e58a-5dec-4419-823e-479ca95e924f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " Feb 19 10:25:18 crc kubenswrapper[4811]: I0219 10:25:18.910538 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-utilities\") pod \"5045e58a-5dec-4419-823e-479ca95e924f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " Feb 19 10:25:18 crc kubenswrapper[4811]: I0219 10:25:18.910599 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-catalog-content\") pod \"5045e58a-5dec-4419-823e-479ca95e924f\" (UID: \"5045e58a-5dec-4419-823e-479ca95e924f\") " Feb 19 10:25:18 crc kubenswrapper[4811]: I0219 10:25:18.912084 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-utilities" (OuterVolumeSpecName: "utilities") pod "5045e58a-5dec-4419-823e-479ca95e924f" (UID: "5045e58a-5dec-4419-823e-479ca95e924f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:25:18 crc kubenswrapper[4811]: I0219 10:25:18.922687 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5045e58a-5dec-4419-823e-479ca95e924f-kube-api-access-8nzdz" (OuterVolumeSpecName: "kube-api-access-8nzdz") pod "5045e58a-5dec-4419-823e-479ca95e924f" (UID: "5045e58a-5dec-4419-823e-479ca95e924f"). InnerVolumeSpecName "kube-api-access-8nzdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:25:18 crc kubenswrapper[4811]: I0219 10:25:18.972574 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5045e58a-5dec-4419-823e-479ca95e924f" (UID: "5045e58a-5dec-4419-823e-479ca95e924f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:25:19 crc kubenswrapper[4811]: I0219 10:25:19.012593 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:19 crc kubenswrapper[4811]: I0219 10:25:19.012631 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5045e58a-5dec-4419-823e-479ca95e924f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:19 crc kubenswrapper[4811]: I0219 10:25:19.012648 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nzdz\" (UniqueName: \"kubernetes.io/projected/5045e58a-5dec-4419-823e-479ca95e924f-kube-api-access-8nzdz\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:19 crc kubenswrapper[4811]: I0219 10:25:19.432485 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h2c5f" event={"ID":"5045e58a-5dec-4419-823e-479ca95e924f","Type":"ContainerDied","Data":"b4a76a93ed744849719a33a73405e259ca614990a66c7f03d256e56743b4c33f"} Feb 19 10:25:19 crc kubenswrapper[4811]: I0219 10:25:19.432553 4811 scope.go:117] "RemoveContainer" containerID="640839e98f4c84982d91b4f9de5d2c5f5a4b172e30758affe889ea8e417ab703" Feb 19 10:25:19 crc kubenswrapper[4811]: I0219 10:25:19.432594 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h2c5f" Feb 19 10:25:19 crc kubenswrapper[4811]: I0219 10:25:19.468326 4811 scope.go:117] "RemoveContainer" containerID="211a80ac549df8588f3b25f8416e4cf6797278b725b9b5d1eb870fda22f15248" Feb 19 10:25:19 crc kubenswrapper[4811]: I0219 10:25:19.489671 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h2c5f"] Feb 19 10:25:19 crc kubenswrapper[4811]: I0219 10:25:19.489735 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h2c5f"] Feb 19 10:25:19 crc kubenswrapper[4811]: I0219 10:25:19.503216 4811 scope.go:117] "RemoveContainer" containerID="f7f77e6130c3ce0b95ceb069fec52d9a63c1190f0ed5e64e1a0750e3b8a11eb1" Feb 19 10:25:20 crc kubenswrapper[4811]: I0219 10:25:20.592604 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5045e58a-5dec-4419-823e-479ca95e924f" path="/var/lib/kubelet/pods/5045e58a-5dec-4419-823e-479ca95e924f/volumes" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.759275 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rrkfr"] Feb 19 10:25:25 crc kubenswrapper[4811]: E0219 10:25:25.762135 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3fab660-6be9-439b-bfec-6b581e8b9c43" containerName="registry-server" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.762154 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fab660-6be9-439b-bfec-6b581e8b9c43" containerName="registry-server" Feb 19 10:25:25 crc kubenswrapper[4811]: E0219 10:25:25.762175 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5045e58a-5dec-4419-823e-479ca95e924f" containerName="registry-server" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.762203 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5045e58a-5dec-4419-823e-479ca95e924f" containerName="registry-server" Feb 19 
10:25:25 crc kubenswrapper[4811]: E0219 10:25:25.762212 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3fab660-6be9-439b-bfec-6b581e8b9c43" containerName="extract-utilities" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.762219 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fab660-6be9-439b-bfec-6b581e8b9c43" containerName="extract-utilities" Feb 19 10:25:25 crc kubenswrapper[4811]: E0219 10:25:25.762231 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5045e58a-5dec-4419-823e-479ca95e924f" containerName="extract-content" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.762238 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5045e58a-5dec-4419-823e-479ca95e924f" containerName="extract-content" Feb 19 10:25:25 crc kubenswrapper[4811]: E0219 10:25:25.762268 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5045e58a-5dec-4419-823e-479ca95e924f" containerName="extract-utilities" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.762276 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5045e58a-5dec-4419-823e-479ca95e924f" containerName="extract-utilities" Feb 19 10:25:25 crc kubenswrapper[4811]: E0219 10:25:25.762289 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3fab660-6be9-439b-bfec-6b581e8b9c43" containerName="extract-content" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.762294 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fab660-6be9-439b-bfec-6b581e8b9c43" containerName="extract-content" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.762440 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5045e58a-5dec-4419-823e-479ca95e924f" containerName="registry-server" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.762479 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3fab660-6be9-439b-bfec-6b581e8b9c43" containerName="registry-server" Feb 
19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.763363 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.767460 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-7hbx5" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.767927 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.768839 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.769156 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.790622 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rrkfr"] Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.835870 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhpqg\" (UniqueName: \"kubernetes.io/projected/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-kube-api-access-xhpqg\") pod \"dnsmasq-dns-675f4bcbfc-rrkfr\" (UID: \"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.837568 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-config\") pod \"dnsmasq-dns-675f4bcbfc-rrkfr\" (UID: \"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.852034 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tb8bf"] Feb 19 10:25:25 crc 
kubenswrapper[4811]: I0219 10:25:25.853562 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.859687 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tb8bf"] Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.859884 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.939159 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2pqs\" (UniqueName: \"kubernetes.io/projected/1a6f078c-1170-4671-bf42-ce21332d2a3c-kube-api-access-q2pqs\") pod \"dnsmasq-dns-78dd6ddcc-tb8bf\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.939433 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhpqg\" (UniqueName: \"kubernetes.io/projected/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-kube-api-access-xhpqg\") pod \"dnsmasq-dns-675f4bcbfc-rrkfr\" (UID: \"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.939596 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-config\") pod \"dnsmasq-dns-78dd6ddcc-tb8bf\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.939671 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-config\") pod \"dnsmasq-dns-675f4bcbfc-rrkfr\" (UID: 
\"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.939742 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tb8bf\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.940644 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-config\") pod \"dnsmasq-dns-675f4bcbfc-rrkfr\" (UID: \"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" Feb 19 10:25:25 crc kubenswrapper[4811]: I0219 10:25:25.957127 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhpqg\" (UniqueName: \"kubernetes.io/projected/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-kube-api-access-xhpqg\") pod \"dnsmasq-dns-675f4bcbfc-rrkfr\" (UID: \"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.040998 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-config\") pod \"dnsmasq-dns-78dd6ddcc-tb8bf\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.041069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tb8bf\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 
10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.041103 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2pqs\" (UniqueName: \"kubernetes.io/projected/1a6f078c-1170-4671-bf42-ce21332d2a3c-kube-api-access-q2pqs\") pod \"dnsmasq-dns-78dd6ddcc-tb8bf\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.042723 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-config\") pod \"dnsmasq-dns-78dd6ddcc-tb8bf\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.043622 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tb8bf\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.060356 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2pqs\" (UniqueName: \"kubernetes.io/projected/1a6f078c-1170-4671-bf42-ce21332d2a3c-kube-api-access-q2pqs\") pod \"dnsmasq-dns-78dd6ddcc-tb8bf\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.103502 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.174154 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.339792 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rrkfr"] Feb 19 10:25:26 crc kubenswrapper[4811]: W0219 10:25:26.344987 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0adfb997_3e8d_45cf_bf14_bc6f3ce105b9.slice/crio-4b3f61009148f1b25b22e7a31f7526ecc91e4651c0806fb5fd2691a87293e957 WatchSource:0}: Error finding container 4b3f61009148f1b25b22e7a31f7526ecc91e4651c0806fb5fd2691a87293e957: Status 404 returned error can't find the container with id 4b3f61009148f1b25b22e7a31f7526ecc91e4651c0806fb5fd2691a87293e957 Feb 19 10:25:26 crc kubenswrapper[4811]: W0219 10:25:26.429135 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a6f078c_1170_4671_bf42_ce21332d2a3c.slice/crio-6f277ae5b897f0391aa89e3ac2d4488a82232574a7db89e1a01a3d73dbf2035a WatchSource:0}: Error finding container 6f277ae5b897f0391aa89e3ac2d4488a82232574a7db89e1a01a3d73dbf2035a: Status 404 returned error can't find the container with id 6f277ae5b897f0391aa89e3ac2d4488a82232574a7db89e1a01a3d73dbf2035a Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.431521 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tb8bf"] Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.523154 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" event={"ID":"1a6f078c-1170-4671-bf42-ce21332d2a3c","Type":"ContainerStarted","Data":"6f277ae5b897f0391aa89e3ac2d4488a82232574a7db89e1a01a3d73dbf2035a"} Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.524418 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" 
event={"ID":"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9","Type":"ContainerStarted","Data":"4b3f61009148f1b25b22e7a31f7526ecc91e4651c0806fb5fd2691a87293e957"} Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.787728 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:25:26 crc kubenswrapper[4811]: I0219 10:25:26.788072 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.623522 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rrkfr"] Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.711608 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qpggf"] Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.715733 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.745047 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qpggf"] Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.800176 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q54rv\" (UniqueName: \"kubernetes.io/projected/96d7a7f6-ff94-4361-b748-52474061017e-kube-api-access-q54rv\") pod \"dnsmasq-dns-666b6646f7-qpggf\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") " pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.800236 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qpggf\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") " pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.800293 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-config\") pod \"dnsmasq-dns-666b6646f7-qpggf\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") " pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.904176 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q54rv\" (UniqueName: \"kubernetes.io/projected/96d7a7f6-ff94-4361-b748-52474061017e-kube-api-access-q54rv\") pod \"dnsmasq-dns-666b6646f7-qpggf\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") " pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.904247 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qpggf\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") " pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.904306 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-config\") pod \"dnsmasq-dns-666b6646f7-qpggf\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") " pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.905334 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-config\") pod \"dnsmasq-dns-666b6646f7-qpggf\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") " pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.906200 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qpggf\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") " pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:28 crc kubenswrapper[4811]: I0219 10:25:28.946679 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q54rv\" (UniqueName: \"kubernetes.io/projected/96d7a7f6-ff94-4361-b748-52474061017e-kube-api-access-q54rv\") pod \"dnsmasq-dns-666b6646f7-qpggf\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") " pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.037157 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.076333 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tb8bf"] Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.095196 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b9vx7"] Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.096660 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.116324 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b9vx7"] Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.209364 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b9vx7\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") " pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.209425 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-config\") pod \"dnsmasq-dns-57d769cc4f-b9vx7\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") " pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.209827 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk7qf\" (UniqueName: \"kubernetes.io/projected/56394562-c333-4096-b46f-9f8b0313c765-kube-api-access-pk7qf\") pod \"dnsmasq-dns-57d769cc4f-b9vx7\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") " pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 
10:25:29.311195 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b9vx7\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") " pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.311266 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-config\") pod \"dnsmasq-dns-57d769cc4f-b9vx7\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") " pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.311339 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk7qf\" (UniqueName: \"kubernetes.io/projected/56394562-c333-4096-b46f-9f8b0313c765-kube-api-access-pk7qf\") pod \"dnsmasq-dns-57d769cc4f-b9vx7\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") " pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.312338 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-config\") pod \"dnsmasq-dns-57d769cc4f-b9vx7\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") " pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.312369 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b9vx7\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") " pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.330292 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk7qf\" 
(UniqueName: \"kubernetes.io/projected/56394562-c333-4096-b46f-9f8b0313c765-kube-api-access-pk7qf\") pod \"dnsmasq-dns-57d769cc4f-b9vx7\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") " pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.422633 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.880052 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.881896 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.885737 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.885919 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.885991 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.885932 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-thsf7" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.886097 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.886110 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.886121 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.896513 4811 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.919244 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.919303 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.919327 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.919350 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.919371 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kphf\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-kube-api-access-5kphf\") pod \"rabbitmq-server-0\" (UID: 
\"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.919387 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f263c1a8-dfa3-4383-849c-bde314ae5503-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.919412 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.919430 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-config-data\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.919480 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f263c1a8-dfa3-4383-849c-bde314ae5503-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:29 crc kubenswrapper[4811]: I0219 10:25:29.919517 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:29 crc 
kubenswrapper[4811]: I0219 10:25:29.919539 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.021544 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.021605 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.021638 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.021665 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kphf\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-kube-api-access-5kphf\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.021688 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f263c1a8-dfa3-4383-849c-bde314ae5503-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.021723 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.021747 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-config-data\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.021778 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f263c1a8-dfa3-4383-849c-bde314ae5503-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.021822 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.021847 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " 
pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.021893 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.022001 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.022434 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.024187 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-config-data\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.024361 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.024421 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.025315 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.029223 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f263c1a8-dfa3-4383-849c-bde314ae5503-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.029701 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f263c1a8-dfa3-4383-849c-bde314ae5503-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.043251 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.045999 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kphf\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-kube-api-access-5kphf\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") 
" pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.048772 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.049543 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.207373 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.209223 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.224063 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.224302 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.224433 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vv7vf" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.224569 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.224691 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.224846 
4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.224953 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.226703 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.233147 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.330377 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.330427 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.330472 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.330507 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/d7378877-24bd-40fc-89c8-6bc6115040a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.330530 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.330563 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wptsg\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-kube-api-access-wptsg\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.330623 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.330643 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7378877-24bd-40fc-89c8-6bc6115040a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.330659 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.330690 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.330711 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.432101 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.432173 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.432194 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.432225 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.432246 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.432262 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.432285 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7378877-24bd-40fc-89c8-6bc6115040a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.432300 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.432325 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wptsg\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-kube-api-access-wptsg\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.432375 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.432389 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7378877-24bd-40fc-89c8-6bc6115040a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.433053 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.433357 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.433531 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.433778 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.433840 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.435022 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.436025 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7378877-24bd-40fc-89c8-6bc6115040a2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.436557 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.439564 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7378877-24bd-40fc-89c8-6bc6115040a2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.439873 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.465644 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.466770 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wptsg\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-kube-api-access-wptsg\") pod \"rabbitmq-cell1-server-0\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:30 crc kubenswrapper[4811]: I0219 10:25:30.547654 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.537945 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.541927 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.551151 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.551205 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.551370 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.551954 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7tqgj" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.552158 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.572248 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.662333 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-kolla-config\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.662486 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw9sn\" (UniqueName: 
\"kubernetes.io/projected/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-kube-api-access-lw9sn\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.662588 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-config-data-default\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.662640 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.662788 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.662894 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.662949 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.663099 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.764492 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.764564 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.764586 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.764624 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " 
pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.764679 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-kolla-config\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.764710 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-config-data-default\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.764730 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.764750 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw9sn\" (UniqueName: \"kubernetes.io/projected/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-kube-api-access-lw9sn\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.765114 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.765669 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.765953 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-kolla-config\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.766353 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-config-data-default\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.767404 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.781206 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.782238 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.783602 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.783873 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw9sn\" (UniqueName: \"kubernetes.io/projected/ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6-kube-api-access-lw9sn\") pod \"openstack-galera-0\" (UID: \"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6\") " pod="openstack/openstack-galera-0" Feb 19 10:25:31 crc kubenswrapper[4811]: I0219 10:25:31.866703 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.717903 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.720085 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.721964 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.722521 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-jspv4" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.723634 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.727942 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.730390 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.883900 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/354e40bc-0a2b-4654-873c-30b6293d6192-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.883977 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/354e40bc-0a2b-4654-873c-30b6293d6192-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.884073 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgm9v\" (UniqueName: 
\"kubernetes.io/projected/354e40bc-0a2b-4654-873c-30b6293d6192-kube-api-access-sgm9v\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.884104 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/354e40bc-0a2b-4654-873c-30b6293d6192-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.884128 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/354e40bc-0a2b-4654-873c-30b6293d6192-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.884193 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.884235 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354e40bc-0a2b-4654-873c-30b6293d6192-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.884259 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/354e40bc-0a2b-4654-873c-30b6293d6192-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.986687 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354e40bc-0a2b-4654-873c-30b6293d6192-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.986783 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/354e40bc-0a2b-4654-873c-30b6293d6192-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.986918 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/354e40bc-0a2b-4654-873c-30b6293d6192-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.986965 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/354e40bc-0a2b-4654-873c-30b6293d6192-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.987043 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgm9v\" (UniqueName: 
\"kubernetes.io/projected/354e40bc-0a2b-4654-873c-30b6293d6192-kube-api-access-sgm9v\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.987107 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/354e40bc-0a2b-4654-873c-30b6293d6192-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.987135 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/354e40bc-0a2b-4654-873c-30b6293d6192-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.987202 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.987539 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/354e40bc-0a2b-4654-873c-30b6293d6192-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.987788 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" 
(UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.988314 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/354e40bc-0a2b-4654-873c-30b6293d6192-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.988479 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/354e40bc-0a2b-4654-873c-30b6293d6192-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.992307 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354e40bc-0a2b-4654-873c-30b6293d6192-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.992536 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/354e40bc-0a2b-4654-873c-30b6293d6192-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:32 crc kubenswrapper[4811]: I0219 10:25:32.996991 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/354e40bc-0a2b-4654-873c-30b6293d6192-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " 
pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.010203 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgm9v\" (UniqueName: \"kubernetes.io/projected/354e40bc-0a2b-4654-873c-30b6293d6192-kube-api-access-sgm9v\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.022201 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"354e40bc-0a2b-4654-873c-30b6293d6192\") " pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.029260 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.030301 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.035613 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-xdtmf" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.036734 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.037027 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.043698 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.046599 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.194087 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2ff968e-6341-43f8-bf15-21009d76f661-config-data\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.194146 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2ff968e-6341-43f8-bf15-21009d76f661-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.194743 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2ff968e-6341-43f8-bf15-21009d76f661-kolla-config\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.194806 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ff968e-6341-43f8-bf15-21009d76f661-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.195123 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd8fs\" (UniqueName: \"kubernetes.io/projected/c2ff968e-6341-43f8-bf15-21009d76f661-kube-api-access-cd8fs\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 
10:25:33.297513 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2ff968e-6341-43f8-bf15-21009d76f661-kolla-config\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.298296 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ff968e-6341-43f8-bf15-21009d76f661-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.298493 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd8fs\" (UniqueName: \"kubernetes.io/projected/c2ff968e-6341-43f8-bf15-21009d76f661-kube-api-access-cd8fs\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.298654 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2ff968e-6341-43f8-bf15-21009d76f661-kolla-config\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.299367 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2ff968e-6341-43f8-bf15-21009d76f661-config-data\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.300511 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2ff968e-6341-43f8-bf15-21009d76f661-memcached-tls-certs\") pod 
\"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.300426 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c2ff968e-6341-43f8-bf15-21009d76f661-config-data\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.314545 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2ff968e-6341-43f8-bf15-21009d76f661-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.317133 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2ff968e-6341-43f8-bf15-21009d76f661-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.319996 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd8fs\" (UniqueName: \"kubernetes.io/projected/c2ff968e-6341-43f8-bf15-21009d76f661-kube-api-access-cd8fs\") pod \"memcached-0\" (UID: \"c2ff968e-6341-43f8-bf15-21009d76f661\") " pod="openstack/memcached-0" Feb 19 10:25:33 crc kubenswrapper[4811]: I0219 10:25:33.397879 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 10:25:35 crc kubenswrapper[4811]: I0219 10:25:35.379754 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:25:35 crc kubenswrapper[4811]: I0219 10:25:35.381650 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:25:35 crc kubenswrapper[4811]: I0219 10:25:35.385343 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-m722p" Feb 19 10:25:35 crc kubenswrapper[4811]: I0219 10:25:35.399720 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:25:35 crc kubenswrapper[4811]: I0219 10:25:35.537242 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hghq\" (UniqueName: \"kubernetes.io/projected/dc7e3241-6036-4ef1-9bba-1621a28623a7-kube-api-access-9hghq\") pod \"kube-state-metrics-0\" (UID: \"dc7e3241-6036-4ef1-9bba-1621a28623a7\") " pod="openstack/kube-state-metrics-0" Feb 19 10:25:35 crc kubenswrapper[4811]: I0219 10:25:35.639058 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hghq\" (UniqueName: \"kubernetes.io/projected/dc7e3241-6036-4ef1-9bba-1621a28623a7-kube-api-access-9hghq\") pod \"kube-state-metrics-0\" (UID: \"dc7e3241-6036-4ef1-9bba-1621a28623a7\") " pod="openstack/kube-state-metrics-0" Feb 19 10:25:35 crc kubenswrapper[4811]: I0219 10:25:35.660198 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hghq\" (UniqueName: \"kubernetes.io/projected/dc7e3241-6036-4ef1-9bba-1621a28623a7-kube-api-access-9hghq\") pod \"kube-state-metrics-0\" (UID: \"dc7e3241-6036-4ef1-9bba-1621a28623a7\") " pod="openstack/kube-state-metrics-0" Feb 19 10:25:35 crc kubenswrapper[4811]: I0219 10:25:35.758994 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.220661 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mvhxm"] Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.222206 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.225936 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-f4h7r" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.226079 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.227010 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.256549 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mvhxm"] Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.275519 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bddvk"] Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.277367 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.283524 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bddvk"] Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.302772 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/00337a1f-39d0-4583-b35b-a50aeb572f60-var-run-ovn\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.302854 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00337a1f-39d0-4583-b35b-a50aeb572f60-scripts\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.302924 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/00337a1f-39d0-4583-b35b-a50aeb572f60-ovn-controller-tls-certs\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.302948 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/00337a1f-39d0-4583-b35b-a50aeb572f60-var-log-ovn\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.302967 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/00337a1f-39d0-4583-b35b-a50aeb572f60-var-run\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.302985 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00337a1f-39d0-4583-b35b-a50aeb572f60-combined-ca-bundle\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.303004 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxft\" (UniqueName: \"kubernetes.io/projected/00337a1f-39d0-4583-b35b-a50aeb572f60-kube-api-access-llxft\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.404706 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-scripts\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.404761 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-etc-ovs\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.404792 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-var-run\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.404827 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-var-lib\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.404864 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srxrl\" (UniqueName: \"kubernetes.io/projected/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-kube-api-access-srxrl\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.404906 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/00337a1f-39d0-4583-b35b-a50aeb572f60-ovn-controller-tls-certs\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.404933 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/00337a1f-39d0-4583-b35b-a50aeb572f60-var-log-ovn\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.404962 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/00337a1f-39d0-4583-b35b-a50aeb572f60-var-run\") 
pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.404983 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00337a1f-39d0-4583-b35b-a50aeb572f60-combined-ca-bundle\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.405006 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llxft\" (UniqueName: \"kubernetes.io/projected/00337a1f-39d0-4583-b35b-a50aeb572f60-kube-api-access-llxft\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.405028 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-var-log\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.405088 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/00337a1f-39d0-4583-b35b-a50aeb572f60-var-run-ovn\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.405118 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00337a1f-39d0-4583-b35b-a50aeb572f60-scripts\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 
10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.407237 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00337a1f-39d0-4583-b35b-a50aeb572f60-scripts\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.412846 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/00337a1f-39d0-4583-b35b-a50aeb572f60-var-log-ovn\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.412884 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/00337a1f-39d0-4583-b35b-a50aeb572f60-var-run-ovn\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.414140 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00337a1f-39d0-4583-b35b-a50aeb572f60-combined-ca-bundle\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.414714 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/00337a1f-39d0-4583-b35b-a50aeb572f60-var-run\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.424591 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llxft\" (UniqueName: 
\"kubernetes.io/projected/00337a1f-39d0-4583-b35b-a50aeb572f60-kube-api-access-llxft\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.425594 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/00337a1f-39d0-4583-b35b-a50aeb572f60-ovn-controller-tls-certs\") pod \"ovn-controller-mvhxm\" (UID: \"00337a1f-39d0-4583-b35b-a50aeb572f60\") " pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.507410 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-scripts\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.507504 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-etc-ovs\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.507537 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-var-run\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.507623 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-var-lib\") pod \"ovn-controller-ovs-bddvk\" (UID: 
\"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.507668 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srxrl\" (UniqueName: \"kubernetes.io/projected/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-kube-api-access-srxrl\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.507724 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-var-log\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.508283 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-var-log\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.509205 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.509681 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-var-run\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.509912 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-etc-ovs\") pod \"ovn-controller-ovs-bddvk\" (UID: 
\"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.510064 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-var-lib\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.511516 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.514425 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-scripts\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.519105 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.520525 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.520551 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-z775x" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.520620 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.520695 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.527776 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 
10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.530973 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srxrl\" (UniqueName: \"kubernetes.io/projected/a76ec0f0-9b42-4e48-bd3f-68a74161cc4c-kube-api-access-srxrl\") pod \"ovn-controller-ovs-bddvk\" (UID: \"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c\") " pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.545769 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.601134 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.609733 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7qm\" (UniqueName: \"kubernetes.io/projected/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-kube-api-access-vw7qm\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.609827 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.609849 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.609891 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.609920 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.610060 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.610093 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.610123 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.712287 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw7qm\" (UniqueName: 
\"kubernetes.io/projected/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-kube-api-access-vw7qm\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.712397 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.712433 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.712506 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.712537 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.712609 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" 
Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.712639 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.712669 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.713024 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.713954 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.714599 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.714816 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.717547 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.718049 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.721259 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.728756 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw7qm\" (UniqueName: \"kubernetes.io/projected/2e3fed93-4b85-4e79-a9d3-795dbd7695ba-kube-api-access-vw7qm\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.735214 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2e3fed93-4b85-4e79-a9d3-795dbd7695ba\") " pod="openstack/ovsdbserver-nb-0" 
Feb 19 10:25:39 crc kubenswrapper[4811]: I0219 10:25:39.835777 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: E0219 10:25:41.513034 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 10:25:41 crc kubenswrapper[4811]: E0219 10:25:41.513485 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhpqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabili
ties{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-rrkfr_openstack(0adfb997-3e8d-45cf-bf14-bc6f3ce105b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:25:41 crc kubenswrapper[4811]: E0219 10:25:41.514646 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" podUID="0adfb997-3e8d-45cf-bf14-bc6f3ce105b9" Feb 19 10:25:41 crc kubenswrapper[4811]: E0219 10:25:41.564922 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 10:25:41 crc kubenswrapper[4811]: E0219 10:25:41.565085 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2pqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-tb8bf_openstack(1a6f078c-1170-4671-bf42-ce21332d2a3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:25:41 crc kubenswrapper[4811]: E0219 10:25:41.566440 4811 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" podUID="1a6f078c-1170-4671-bf42-ce21332d2a3c" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.770900 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.772914 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.777889 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.778236 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.778400 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-js795" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.778589 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.799627 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.851508 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9b5380-d587-4d65-8ddb-b6c2faef624a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.851566 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.851602 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9b5380-d587-4d65-8ddb-b6c2faef624a-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.851627 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a9b5380-d587-4d65-8ddb-b6c2faef624a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.851666 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a9b5380-d587-4d65-8ddb-b6c2faef624a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.851717 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a9b5380-d587-4d65-8ddb-b6c2faef624a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.851743 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7msb\" (UniqueName: 
\"kubernetes.io/projected/1a9b5380-d587-4d65-8ddb-b6c2faef624a-kube-api-access-r7msb\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.851768 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a9b5380-d587-4d65-8ddb-b6c2faef624a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.953958 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a9b5380-d587-4d65-8ddb-b6c2faef624a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.954567 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a9b5380-d587-4d65-8ddb-b6c2faef624a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.954606 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7msb\" (UniqueName: \"kubernetes.io/projected/1a9b5380-d587-4d65-8ddb-b6c2faef624a-kube-api-access-r7msb\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.954963 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a9b5380-d587-4d65-8ddb-b6c2faef624a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.955159 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9b5380-d587-4d65-8ddb-b6c2faef624a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.955197 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.955229 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9b5380-d587-4d65-8ddb-b6c2faef624a-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.955258 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a9b5380-d587-4d65-8ddb-b6c2faef624a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.955477 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a9b5380-d587-4d65-8ddb-b6c2faef624a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.955764 4811 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.956787 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9b5380-d587-4d65-8ddb-b6c2faef624a-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.965121 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a9b5380-d587-4d65-8ddb-b6c2faef624a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.965253 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a9b5380-d587-4d65-8ddb-b6c2faef624a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.965818 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a9b5380-d587-4d65-8ddb-b6c2faef624a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.966037 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a9b5380-d587-4d65-8ddb-b6c2faef624a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.988889 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:41 crc kubenswrapper[4811]: I0219 10:25:41.989591 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7msb\" (UniqueName: \"kubernetes.io/projected/1a9b5380-d587-4d65-8ddb-b6c2faef624a-kube-api-access-r7msb\") pod \"ovsdbserver-sb-0\" (UID: \"1a9b5380-d587-4d65-8ddb-b6c2faef624a\") " pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.176856 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.464283 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.476678 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.566329 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-dns-svc\") pod \"1a6f078c-1170-4671-bf42-ce21332d2a3c\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.566540 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2pqs\" (UniqueName: \"kubernetes.io/projected/1a6f078c-1170-4671-bf42-ce21332d2a3c-kube-api-access-q2pqs\") pod \"1a6f078c-1170-4671-bf42-ce21332d2a3c\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.566573 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhpqg\" (UniqueName: \"kubernetes.io/projected/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-kube-api-access-xhpqg\") pod \"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9\" (UID: \"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9\") " Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.566591 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-config\") pod \"1a6f078c-1170-4671-bf42-ce21332d2a3c\" (UID: \"1a6f078c-1170-4671-bf42-ce21332d2a3c\") " Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.566615 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-config\") pod \"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9\" (UID: \"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9\") " Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.566877 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a6f078c-1170-4671-bf42-ce21332d2a3c" (UID: "1a6f078c-1170-4671-bf42-ce21332d2a3c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.567444 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-config" (OuterVolumeSpecName: "config") pod "1a6f078c-1170-4671-bf42-ce21332d2a3c" (UID: "1a6f078c-1170-4671-bf42-ce21332d2a3c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.567572 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-config" (OuterVolumeSpecName: "config") pod "0adfb997-3e8d-45cf-bf14-bc6f3ce105b9" (UID: "0adfb997-3e8d-45cf-bf14-bc6f3ce105b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.573766 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-kube-api-access-xhpqg" (OuterVolumeSpecName: "kube-api-access-xhpqg") pod "0adfb997-3e8d-45cf-bf14-bc6f3ce105b9" (UID: "0adfb997-3e8d-45cf-bf14-bc6f3ce105b9"). InnerVolumeSpecName "kube-api-access-xhpqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.574179 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a6f078c-1170-4671-bf42-ce21332d2a3c-kube-api-access-q2pqs" (OuterVolumeSpecName: "kube-api-access-q2pqs") pod "1a6f078c-1170-4671-bf42-ce21332d2a3c" (UID: "1a6f078c-1170-4671-bf42-ce21332d2a3c"). InnerVolumeSpecName "kube-api-access-q2pqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.618237 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.636545 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qpggf"] Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.652497 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.668514 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2pqs\" (UniqueName: \"kubernetes.io/projected/1a6f078c-1170-4671-bf42-ce21332d2a3c-kube-api-access-q2pqs\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.668557 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhpqg\" (UniqueName: \"kubernetes.io/projected/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-kube-api-access-xhpqg\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.668568 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.668578 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.668587 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a6f078c-1170-4671-bf42-ce21332d2a3c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.717104 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-b9vx7"] Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.717191 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.741164 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.751611 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mvhxm"] Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.772278 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6","Type":"ContainerStarted","Data":"6687bc1f8d6ca2df7f38ba61803a6f51c1256e28fbb5b2858f8d975df4bc05a5"} Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.776290 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.796115 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.800696 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" event={"ID":"0adfb997-3e8d-45cf-bf14-bc6f3ce105b9","Type":"ContainerDied","Data":"4b3f61009148f1b25b22e7a31f7526ecc91e4651c0806fb5fd2691a87293e957"} Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.800801 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rrkfr" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.806898 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f263c1a8-dfa3-4383-849c-bde314ae5503","Type":"ContainerStarted","Data":"a4175518b21f1289bf245dc659c5bd01d715aa4e534e7ebcae47fad7a436d880"} Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.808304 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qpggf" event={"ID":"96d7a7f6-ff94-4361-b748-52474061017e","Type":"ContainerStarted","Data":"5a47380ea64e0bbb7e12ff4df71311887f54ec52fb20607de7bfe371f1d8ae61"} Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.809203 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" event={"ID":"1a6f078c-1170-4671-bf42-ce21332d2a3c","Type":"ContainerDied","Data":"6f277ae5b897f0391aa89e3ac2d4488a82232574a7db89e1a01a3d73dbf2035a"} Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.809275 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tb8bf" Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.888404 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 10:25:42 crc kubenswrapper[4811]: I0219 10:25:42.983138 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bddvk"] Feb 19 10:25:43 crc kubenswrapper[4811]: W0219 10:25:43.010565 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda76ec0f0_9b42_4e48_bd3f_68a74161cc4c.slice/crio-9e5d765ad8e3b909e90a87862c21a95a6eca2d9787c823ac0b634ed66b652dd2 WatchSource:0}: Error finding container 9e5d765ad8e3b909e90a87862c21a95a6eca2d9787c823ac0b634ed66b652dd2: Status 404 returned error can't find the container with id 9e5d765ad8e3b909e90a87862c21a95a6eca2d9787c823ac0b634ed66b652dd2 Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.028488 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tb8bf"] Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.031459 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tb8bf"] Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.057628 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rrkfr"] Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.063625 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rrkfr"] Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.822073 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7378877-24bd-40fc-89c8-6bc6115040a2","Type":"ContainerStarted","Data":"5e91ae3b4782c822a31d69ac700b3045f54cf67c5797fa0fa3cd88c0a2216b92"} Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.825713 4811 generic.go:334] "Generic (PLEG): container 
finished" podID="96d7a7f6-ff94-4361-b748-52474061017e" containerID="e83336d53d9dc0632ac6d9b98104088592f3ac74b3146c72acb23fcc17910099" exitCode=0 Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.825808 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qpggf" event={"ID":"96d7a7f6-ff94-4361-b748-52474061017e","Type":"ContainerDied","Data":"e83336d53d9dc0632ac6d9b98104088592f3ac74b3146c72acb23fcc17910099"} Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.830196 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e3fed93-4b85-4e79-a9d3-795dbd7695ba","Type":"ContainerStarted","Data":"00bf35ca1b2bfa79ce225396480b206c3d257040abe85a7fc8c47314a903d30d"} Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.831515 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mvhxm" event={"ID":"00337a1f-39d0-4583-b35b-a50aeb572f60","Type":"ContainerStarted","Data":"6a4df90b17dcb3d677ef4e76109783f011a426bb3633b48f2e65095397b121be"} Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.833610 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bddvk" event={"ID":"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c","Type":"ContainerStarted","Data":"9e5d765ad8e3b909e90a87862c21a95a6eca2d9787c823ac0b634ed66b652dd2"} Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.835348 4811 generic.go:334] "Generic (PLEG): container finished" podID="56394562-c333-4096-b46f-9f8b0313c765" containerID="38cf1a79ec564dd42009e15c34dafd4528299849362a2fe3b448da7ade05d4a2" exitCode=0 Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.835396 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" event={"ID":"56394562-c333-4096-b46f-9f8b0313c765","Type":"ContainerDied","Data":"38cf1a79ec564dd42009e15c34dafd4528299849362a2fe3b448da7ade05d4a2"} Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 
10:25:43.835412 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" event={"ID":"56394562-c333-4096-b46f-9f8b0313c765","Type":"ContainerStarted","Data":"e029b92ed5e5b799bc73662d4c4c68a5287e21751b7531b891585d14b1a6957b"} Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.837619 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c2ff968e-6341-43f8-bf15-21009d76f661","Type":"ContainerStarted","Data":"bf7fe42e7932c818dd8142ffd5e6235d2823ea5b3b3398654597910d20c9ae51"} Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.839032 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"354e40bc-0a2b-4654-873c-30b6293d6192","Type":"ContainerStarted","Data":"e7b4f8dd9b4e29e7d63e4b517941df578ca53ba15b8200cc552051e60f1e9f22"} Feb 19 10:25:43 crc kubenswrapper[4811]: I0219 10:25:43.843747 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc7e3241-6036-4ef1-9bba-1621a28623a7","Type":"ContainerStarted","Data":"8d47a8fbbe8206afbb102d09dd2718f57616dcef69806608092736d21788f948"} Feb 19 10:25:44 crc kubenswrapper[4811]: I0219 10:25:44.042868 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 10:25:44 crc kubenswrapper[4811]: W0219 10:25:44.193618 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a9b5380_d587_4d65_8ddb_b6c2faef624a.slice/crio-5f20984ef438d8ba6fa6e2e236235c7cbd99a923c488c47ece4faa6115a67984 WatchSource:0}: Error finding container 5f20984ef438d8ba6fa6e2e236235c7cbd99a923c488c47ece4faa6115a67984: Status 404 returned error can't find the container with id 5f20984ef438d8ba6fa6e2e236235c7cbd99a923c488c47ece4faa6115a67984 Feb 19 10:25:44 crc kubenswrapper[4811]: I0219 10:25:44.594716 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="0adfb997-3e8d-45cf-bf14-bc6f3ce105b9" path="/var/lib/kubelet/pods/0adfb997-3e8d-45cf-bf14-bc6f3ce105b9/volumes" Feb 19 10:25:44 crc kubenswrapper[4811]: I0219 10:25:44.595119 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6f078c-1170-4671-bf42-ce21332d2a3c" path="/var/lib/kubelet/pods/1a6f078c-1170-4671-bf42-ce21332d2a3c/volumes" Feb 19 10:25:44 crc kubenswrapper[4811]: I0219 10:25:44.855138 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1a9b5380-d587-4d65-8ddb-b6c2faef624a","Type":"ContainerStarted","Data":"5f20984ef438d8ba6fa6e2e236235c7cbd99a923c488c47ece4faa6115a67984"} Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.237642 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qwhjn"] Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.264316 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qwhjn"] Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.264471 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.268060 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.329573 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/764be2bf-8d0b-4783-afcd-de76bac7ce34-ovn-rundir\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.329647 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/764be2bf-8d0b-4783-afcd-de76bac7ce34-ovs-rundir\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.329675 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2nnw\" (UniqueName: \"kubernetes.io/projected/764be2bf-8d0b-4783-afcd-de76bac7ce34-kube-api-access-c2nnw\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.329702 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/764be2bf-8d0b-4783-afcd-de76bac7ce34-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.329745 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/764be2bf-8d0b-4783-afcd-de76bac7ce34-config\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.329772 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764be2bf-8d0b-4783-afcd-de76bac7ce34-combined-ca-bundle\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.395090 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qpggf"] Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.416194 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s74fv"] Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.417520 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.419582 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.434092 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2nnw\" (UniqueName: \"kubernetes.io/projected/764be2bf-8d0b-4783-afcd-de76bac7ce34-kube-api-access-c2nnw\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.434155 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/764be2bf-8d0b-4783-afcd-de76bac7ce34-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.434208 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcwnd\" (UniqueName: \"kubernetes.io/projected/668b2133-abf2-41a1-a9ee-3edd030b23ac-kube-api-access-mcwnd\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.434244 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.434284 4811 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/764be2bf-8d0b-4783-afcd-de76bac7ce34-config\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.434323 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764be2bf-8d0b-4783-afcd-de76bac7ce34-combined-ca-bundle\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.434371 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-config\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.434468 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/764be2bf-8d0b-4783-afcd-de76bac7ce34-ovn-rundir\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.434541 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/764be2bf-8d0b-4783-afcd-de76bac7ce34-ovs-rundir\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.434571 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.434852 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s74fv"] Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.436538 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/764be2bf-8d0b-4783-afcd-de76bac7ce34-ovn-rundir\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.436660 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/764be2bf-8d0b-4783-afcd-de76bac7ce34-ovs-rundir\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.437294 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/764be2bf-8d0b-4783-afcd-de76bac7ce34-config\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.445978 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764be2bf-8d0b-4783-afcd-de76bac7ce34-combined-ca-bundle\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.461326 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/764be2bf-8d0b-4783-afcd-de76bac7ce34-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.480879 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2nnw\" (UniqueName: \"kubernetes.io/projected/764be2bf-8d0b-4783-afcd-de76bac7ce34-kube-api-access-c2nnw\") pod \"ovn-controller-metrics-qwhjn\" (UID: \"764be2bf-8d0b-4783-afcd-de76bac7ce34\") " pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.536664 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcwnd\" (UniqueName: \"kubernetes.io/projected/668b2133-abf2-41a1-a9ee-3edd030b23ac-kube-api-access-mcwnd\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.536761 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.536875 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-config\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.536996 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.538227 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.539026 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-config\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.539657 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.555009 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcwnd\" (UniqueName: \"kubernetes.io/projected/668b2133-abf2-41a1-a9ee-3edd030b23ac-kube-api-access-mcwnd\") pod \"dnsmasq-dns-7fd796d7df-s74fv\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.563442 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b9vx7"] Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.596019 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-86db49b7ff-6js64"] Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.596001 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qwhjn" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.602255 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.623543 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.654125 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.654315 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mstrp\" (UniqueName: \"kubernetes.io/projected/33876f22-ea5f-424b-8f1a-fb0058bf3384-kube-api-access-mstrp\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.654728 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.655436 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-config\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.655690 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.657795 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6js64"] Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.757413 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.758059 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-config\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.758125 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.758268 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.758302 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mstrp\" (UniqueName: \"kubernetes.io/projected/33876f22-ea5f-424b-8f1a-fb0058bf3384-kube-api-access-mstrp\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.759402 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-config\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.759629 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.759733 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.760202 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.777812 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mstrp\" (UniqueName: \"kubernetes.io/projected/33876f22-ea5f-424b-8f1a-fb0058bf3384-kube-api-access-mstrp\") pod \"dnsmasq-dns-86db49b7ff-6js64\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.818846 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:25:51 crc kubenswrapper[4811]: I0219 10:25:51.962172 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.378398 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qwhjn"] Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.398646 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s74fv"] Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.724497 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6js64"] Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.922101 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qwhjn" event={"ID":"764be2bf-8d0b-4783-afcd-de76bac7ce34","Type":"ContainerStarted","Data":"a0c2b9699b0e0dc5a7fb6badbd62ec14c3d32abb1a3cbb6a60a06c4c1388e0f3"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.923974 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"dc7e3241-6036-4ef1-9bba-1621a28623a7","Type":"ContainerStarted","Data":"bb0dd2f210291ef068a155356f790adfeac0762f139e1e84d4bbec15669edc44"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.924180 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.926858 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mvhxm" event={"ID":"00337a1f-39d0-4583-b35b-a50aeb572f60","Type":"ContainerStarted","Data":"e04ad3a7e1528ceec858a479a4718bd6c9d0df1d8abd28501eeb2dda6b886411"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.927287 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mvhxm" Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.929319 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" event={"ID":"668b2133-abf2-41a1-a9ee-3edd030b23ac","Type":"ContainerStarted","Data":"0e3b9595b8272af16f3f9bbf85fd649f46ddd7a9ccb2041f3065f8f9a6db3b43"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.930527 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" event={"ID":"33876f22-ea5f-424b-8f1a-fb0058bf3384","Type":"ContainerStarted","Data":"9a195c2c060e64cff3e45ccbf3ca9bdfd594cc11bfa273b489eb22330cffeb85"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.936202 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qpggf" event={"ID":"96d7a7f6-ff94-4361-b748-52474061017e","Type":"ContainerStarted","Data":"7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.936606 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-qpggf" podUID="96d7a7f6-ff94-4361-b748-52474061017e" containerName="dnsmasq-dns" 
containerID="cri-o://7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836" gracePeriod=10 Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.937468 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-qpggf" Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.945239 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c2ff968e-6341-43f8-bf15-21009d76f661","Type":"ContainerStarted","Data":"166d5033dbc37b4474ab2314ab5956966c79de7d13946998e323a7cc0d439768"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.946399 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.952716 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=8.695186196 podStartE2EDuration="17.952696941s" podCreationTimestamp="2026-02-19 10:25:35 +0000 UTC" firstStartedPulling="2026-02-19 10:25:42.952599952 +0000 UTC m=+1031.479064638" lastFinishedPulling="2026-02-19 10:25:52.210110697 +0000 UTC m=+1040.736575383" observedRunningTime="2026-02-19 10:25:52.938675264 +0000 UTC m=+1041.465139950" watchObservedRunningTime="2026-02-19 10:25:52.952696941 +0000 UTC m=+1041.479161627" Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.963813 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1a9b5380-d587-4d65-8ddb-b6c2faef624a","Type":"ContainerStarted","Data":"e9d51ecd7f04d2a59d51559019b2cd5d374cf73a1c7e20c560d3b71ac51236ac"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.964610 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mvhxm" podStartSLOduration=6.033576362 podStartE2EDuration="13.964589135s" podCreationTimestamp="2026-02-19 10:25:39 +0000 UTC" firstStartedPulling="2026-02-19 
10:25:42.952027997 +0000 UTC m=+1031.478492673" lastFinishedPulling="2026-02-19 10:25:50.88304075 +0000 UTC m=+1039.409505446" observedRunningTime="2026-02-19 10:25:52.961866905 +0000 UTC m=+1041.488331601" watchObservedRunningTime="2026-02-19 10:25:52.964589135 +0000 UTC m=+1041.491053821" Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.971244 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6","Type":"ContainerStarted","Data":"c50d6fce696dc9d97341aca26331296ca144977e1c5741a9ec05a490a514e8b6"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.973026 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bddvk" event={"ID":"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c","Type":"ContainerStarted","Data":"7400e7e6ed8e8bd9944aa1ff38db261b7430f444202034efc7914bfd24a0372f"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.981210 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" podUID="56394562-c333-4096-b46f-9f8b0313c765" containerName="dnsmasq-dns" containerID="cri-o://5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89" gracePeriod=10 Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.981559 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" event={"ID":"56394562-c333-4096-b46f-9f8b0313c765","Type":"ContainerStarted","Data":"5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.981588 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.995047 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"2e3fed93-4b85-4e79-a9d3-795dbd7695ba","Type":"ContainerStarted","Data":"305184ee67c07869cc995f879970aee0b93651057b140c85afbd0e615c28f839"} Feb 19 10:25:52 crc kubenswrapper[4811]: I0219 10:25:52.997769 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-qpggf" podStartSLOduration=24.524661151 podStartE2EDuration="24.99773502s" podCreationTimestamp="2026-02-19 10:25:28 +0000 UTC" firstStartedPulling="2026-02-19 10:25:42.624957473 +0000 UTC m=+1031.151422159" lastFinishedPulling="2026-02-19 10:25:43.098031332 +0000 UTC m=+1031.624496028" observedRunningTime="2026-02-19 10:25:52.988961627 +0000 UTC m=+1041.515426313" watchObservedRunningTime="2026-02-19 10:25:52.99773502 +0000 UTC m=+1041.524199706" Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.003766 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"354e40bc-0a2b-4654-873c-30b6293d6192","Type":"ContainerStarted","Data":"37ce36ed8230c8686e8c3a4a494ee24bbb43e389fe5c8b3449a1602efd81ab53"} Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.032885 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" podStartSLOduration=23.511593168 podStartE2EDuration="24.032830056s" podCreationTimestamp="2026-02-19 10:25:29 +0000 UTC" firstStartedPulling="2026-02-19 10:25:42.922713089 +0000 UTC m=+1031.449177775" lastFinishedPulling="2026-02-19 10:25:43.443949977 +0000 UTC m=+1031.970414663" observedRunningTime="2026-02-19 10:25:53.029175033 +0000 UTC m=+1041.555639719" watchObservedRunningTime="2026-02-19 10:25:53.032830056 +0000 UTC m=+1041.559294742" Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.051632 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.550481129 podStartE2EDuration="21.051598795s" podCreationTimestamp="2026-02-19 10:25:32 +0000 
UTC" firstStartedPulling="2026-02-19 10:25:42.922969056 +0000 UTC m=+1031.449433742" lastFinishedPulling="2026-02-19 10:25:50.424086722 +0000 UTC m=+1038.950551408" observedRunningTime="2026-02-19 10:25:53.048080165 +0000 UTC m=+1041.574544851" watchObservedRunningTime="2026-02-19 10:25:53.051598795 +0000 UTC m=+1041.578063481"
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.820968 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qpggf"
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.827291 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7"
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.924237 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q54rv\" (UniqueName: \"kubernetes.io/projected/96d7a7f6-ff94-4361-b748-52474061017e-kube-api-access-q54rv\") pod \"96d7a7f6-ff94-4361-b748-52474061017e\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") "
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.924302 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk7qf\" (UniqueName: \"kubernetes.io/projected/56394562-c333-4096-b46f-9f8b0313c765-kube-api-access-pk7qf\") pod \"56394562-c333-4096-b46f-9f8b0313c765\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") "
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.924420 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-config\") pod \"56394562-c333-4096-b46f-9f8b0313c765\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") "
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.924586 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-config\") pod \"96d7a7f6-ff94-4361-b748-52474061017e\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") "
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.924734 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-dns-svc\") pod \"96d7a7f6-ff94-4361-b748-52474061017e\" (UID: \"96d7a7f6-ff94-4361-b748-52474061017e\") "
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.924779 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-dns-svc\") pod \"56394562-c333-4096-b46f-9f8b0313c765\" (UID: \"56394562-c333-4096-b46f-9f8b0313c765\") "
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.940854 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d7a7f6-ff94-4361-b748-52474061017e-kube-api-access-q54rv" (OuterVolumeSpecName: "kube-api-access-q54rv") pod "96d7a7f6-ff94-4361-b748-52474061017e" (UID: "96d7a7f6-ff94-4361-b748-52474061017e"). InnerVolumeSpecName "kube-api-access-q54rv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.948156 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56394562-c333-4096-b46f-9f8b0313c765-kube-api-access-pk7qf" (OuterVolumeSpecName: "kube-api-access-pk7qf") pod "56394562-c333-4096-b46f-9f8b0313c765" (UID: "56394562-c333-4096-b46f-9f8b0313c765"). InnerVolumeSpecName "kube-api-access-pk7qf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.975201 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-config" (OuterVolumeSpecName: "config") pod "96d7a7f6-ff94-4361-b748-52474061017e" (UID: "96d7a7f6-ff94-4361-b748-52474061017e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.975643 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-config" (OuterVolumeSpecName: "config") pod "56394562-c333-4096-b46f-9f8b0313c765" (UID: "56394562-c333-4096-b46f-9f8b0313c765"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.987542 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56394562-c333-4096-b46f-9f8b0313c765" (UID: "56394562-c333-4096-b46f-9f8b0313c765"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:25:53 crc kubenswrapper[4811]: I0219 10:25:53.988052 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "96d7a7f6-ff94-4361-b748-52474061017e" (UID: "96d7a7f6-ff94-4361-b748-52474061017e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.017266 4811 generic.go:334] "Generic (PLEG): container finished" podID="56394562-c333-4096-b46f-9f8b0313c765" containerID="5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89" exitCode=0
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.017359 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.017364 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" event={"ID":"56394562-c333-4096-b46f-9f8b0313c765","Type":"ContainerDied","Data":"5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89"}
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.017482 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b9vx7" event={"ID":"56394562-c333-4096-b46f-9f8b0313c765","Type":"ContainerDied","Data":"e029b92ed5e5b799bc73662d4c4c68a5287e21751b7531b891585d14b1a6957b"}
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.017500 4811 scope.go:117] "RemoveContainer" containerID="5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.020302 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f263c1a8-dfa3-4383-849c-bde314ae5503","Type":"ContainerStarted","Data":"f3e9158665e56adf6d6316c3aa8190c955795799012ad86edea35e82b7ad74e8"}
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.025425 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qpggf"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.025480 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qpggf" event={"ID":"96d7a7f6-ff94-4361-b748-52474061017e","Type":"ContainerDied","Data":"7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836"}
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.025249 4811 generic.go:334] "Generic (PLEG): container finished" podID="96d7a7f6-ff94-4361-b748-52474061017e" containerID="7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836" exitCode=0
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.025729 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qpggf" event={"ID":"96d7a7f6-ff94-4361-b748-52474061017e","Type":"ContainerDied","Data":"5a47380ea64e0bbb7e12ff4df71311887f54ec52fb20607de7bfe371f1d8ae61"}
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.026502 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.026526 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96d7a7f6-ff94-4361-b748-52474061017e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.026536 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.026595 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q54rv\" (UniqueName: \"kubernetes.io/projected/96d7a7f6-ff94-4361-b748-52474061017e-kube-api-access-q54rv\") on node \"crc\" DevicePath \"\""
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.026611 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk7qf\" (UniqueName: \"kubernetes.io/projected/56394562-c333-4096-b46f-9f8b0313c765-kube-api-access-pk7qf\") on node \"crc\" DevicePath \"\""
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.026620 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56394562-c333-4096-b46f-9f8b0313c765-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.027552 4811 generic.go:334] "Generic (PLEG): container finished" podID="668b2133-abf2-41a1-a9ee-3edd030b23ac" containerID="5c20456bd0c71501dc352c84bb24e5d28078d4c1826a439039c1221124d98725" exitCode=0
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.027613 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" event={"ID":"668b2133-abf2-41a1-a9ee-3edd030b23ac","Type":"ContainerDied","Data":"5c20456bd0c71501dc352c84bb24e5d28078d4c1826a439039c1221124d98725"}
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.032616 4811 generic.go:334] "Generic (PLEG): container finished" podID="a76ec0f0-9b42-4e48-bd3f-68a74161cc4c" containerID="7400e7e6ed8e8bd9944aa1ff38db261b7430f444202034efc7914bfd24a0372f" exitCode=0
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.033090 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bddvk" event={"ID":"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c","Type":"ContainerDied","Data":"7400e7e6ed8e8bd9944aa1ff38db261b7430f444202034efc7914bfd24a0372f"}
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.037219 4811 generic.go:334] "Generic (PLEG): container finished" podID="33876f22-ea5f-424b-8f1a-fb0058bf3384" containerID="45d14b4cd54548216e1eea25b511a21708268bacaec4cda5ae12e0ad173b1f3c" exitCode=0
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.037298 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" event={"ID":"33876f22-ea5f-424b-8f1a-fb0058bf3384","Type":"ContainerDied","Data":"45d14b4cd54548216e1eea25b511a21708268bacaec4cda5ae12e0ad173b1f3c"}
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.041198 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7378877-24bd-40fc-89c8-6bc6115040a2","Type":"ContainerStarted","Data":"d9655df4b91bafa3e771ef345d5f6edc51398eec9e08ccdc8f1eec6c229e5dc4"}
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.185722 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qpggf"]
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.202438 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qpggf"]
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.222995 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b9vx7"]
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.232044 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b9vx7"]
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.595294 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56394562-c333-4096-b46f-9f8b0313c765" path="/var/lib/kubelet/pods/56394562-c333-4096-b46f-9f8b0313c765/volumes"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.596040 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d7a7f6-ff94-4361-b748-52474061017e" path="/var/lib/kubelet/pods/96d7a7f6-ff94-4361-b748-52474061017e/volumes"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.687753 4811 scope.go:117] "RemoveContainer" containerID="38cf1a79ec564dd42009e15c34dafd4528299849362a2fe3b448da7ade05d4a2"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.794955 4811 scope.go:117] "RemoveContainer" containerID="5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89"
Feb 19 10:25:54 crc kubenswrapper[4811]: E0219 10:25:54.801210 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89\": container with ID starting with 5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89 not found: ID does not exist" containerID="5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.801280 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89"} err="failed to get container status \"5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89\": rpc error: code = NotFound desc = could not find container \"5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89\": container with ID starting with 5366dc6ff6e1f0d78bc7b2ad4219cc82f10e878572460741b7a5c9fe75559e89 not found: ID does not exist"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.801324 4811 scope.go:117] "RemoveContainer" containerID="38cf1a79ec564dd42009e15c34dafd4528299849362a2fe3b448da7ade05d4a2"
Feb 19 10:25:54 crc kubenswrapper[4811]: E0219 10:25:54.806135 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38cf1a79ec564dd42009e15c34dafd4528299849362a2fe3b448da7ade05d4a2\": container with ID starting with 38cf1a79ec564dd42009e15c34dafd4528299849362a2fe3b448da7ade05d4a2 not found: ID does not exist" containerID="38cf1a79ec564dd42009e15c34dafd4528299849362a2fe3b448da7ade05d4a2"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.806225 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38cf1a79ec564dd42009e15c34dafd4528299849362a2fe3b448da7ade05d4a2"} err="failed to get container status \"38cf1a79ec564dd42009e15c34dafd4528299849362a2fe3b448da7ade05d4a2\": rpc error: code = NotFound desc = could not find container \"38cf1a79ec564dd42009e15c34dafd4528299849362a2fe3b448da7ade05d4a2\": container with ID starting with 38cf1a79ec564dd42009e15c34dafd4528299849362a2fe3b448da7ade05d4a2 not found: ID does not exist"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.806279 4811 scope.go:117] "RemoveContainer" containerID="7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.905611 4811 scope.go:117] "RemoveContainer" containerID="e83336d53d9dc0632ac6d9b98104088592f3ac74b3146c72acb23fcc17910099"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.959028 4811 scope.go:117] "RemoveContainer" containerID="7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836"
Feb 19 10:25:54 crc kubenswrapper[4811]: E0219 10:25:54.960840 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836\": container with ID starting with 7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836 not found: ID does not exist" containerID="7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.961911 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836"} err="failed to get container status \"7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836\": rpc error: code = NotFound desc = could not find container \"7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836\": container with ID starting with 7d4ac5f8fdd93d376f46ced842f11ca3dc632c4bcf2d1273f888067e0d25a836 not found: ID does not exist"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.961950 4811 scope.go:117] "RemoveContainer" containerID="e83336d53d9dc0632ac6d9b98104088592f3ac74b3146c72acb23fcc17910099"
Feb 19 10:25:54 crc kubenswrapper[4811]: E0219 10:25:54.963123 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e83336d53d9dc0632ac6d9b98104088592f3ac74b3146c72acb23fcc17910099\": container with ID starting with e83336d53d9dc0632ac6d9b98104088592f3ac74b3146c72acb23fcc17910099 not found: ID does not exist" containerID="e83336d53d9dc0632ac6d9b98104088592f3ac74b3146c72acb23fcc17910099"
Feb 19 10:25:54 crc kubenswrapper[4811]: I0219 10:25:54.963152 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e83336d53d9dc0632ac6d9b98104088592f3ac74b3146c72acb23fcc17910099"} err="failed to get container status \"e83336d53d9dc0632ac6d9b98104088592f3ac74b3146c72acb23fcc17910099\": rpc error: code = NotFound desc = could not find container \"e83336d53d9dc0632ac6d9b98104088592f3ac74b3146c72acb23fcc17910099\": container with ID starting with e83336d53d9dc0632ac6d9b98104088592f3ac74b3146c72acb23fcc17910099 not found: ID does not exist"
Feb 19 10:25:55 crc kubenswrapper[4811]: I0219 10:25:55.066398 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" event={"ID":"33876f22-ea5f-424b-8f1a-fb0058bf3384","Type":"ContainerStarted","Data":"1238ea54d0b03a05ae05ee6a7624c9014e0d6b4152522099169cdd41564d04c3"}
Feb 19 10:25:55 crc kubenswrapper[4811]: I0219 10:25:55.066736 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-6js64"
Feb 19 10:25:55 crc kubenswrapper[4811]: I0219 10:25:55.075133 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1a9b5380-d587-4d65-8ddb-b6c2faef624a","Type":"ContainerStarted","Data":"a1956c8df129c718f890776f729892e47b849e3b2be55942e2356cb1156f05a6"}
Feb 19 10:25:55 crc kubenswrapper[4811]: I0219 10:25:55.079538 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" event={"ID":"668b2133-abf2-41a1-a9ee-3edd030b23ac","Type":"ContainerStarted","Data":"7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4"}
Feb 19 10:25:55 crc kubenswrapper[4811]: I0219 10:25:55.080154 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv"
Feb 19 10:25:55 crc kubenswrapper[4811]: I0219 10:25:55.097783 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" podStartSLOduration=4.097761305 podStartE2EDuration="4.097761305s" podCreationTimestamp="2026-02-19 10:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:25:55.091985168 +0000 UTC m=+1043.618449854" watchObservedRunningTime="2026-02-19 10:25:55.097761305 +0000 UTC m=+1043.624225991"
Feb 19 10:25:55 crc kubenswrapper[4811]: I0219 10:25:55.123458 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.548518536 podStartE2EDuration="15.12342627s" podCreationTimestamp="2026-02-19 10:25:40 +0000 UTC" firstStartedPulling="2026-02-19 10:25:44.195161621 +0000 UTC m=+1032.721626307" lastFinishedPulling="2026-02-19 10:25:54.770069355 +0000 UTC m=+1043.296534041" observedRunningTime="2026-02-19 10:25:55.116486683 +0000 UTC m=+1043.642951369" watchObservedRunningTime="2026-02-19 10:25:55.12342627 +0000 UTC m=+1043.649890956"
Feb 19 10:25:55 crc kubenswrapper[4811]: I0219 10:25:55.144577 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" podStartSLOduration=4.144560599 podStartE2EDuration="4.144560599s" podCreationTimestamp="2026-02-19 10:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:25:55.142697782 +0000 UTC m=+1043.669162458" watchObservedRunningTime="2026-02-19 10:25:55.144560599 +0000 UTC m=+1043.671025285"
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.090723 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bddvk" event={"ID":"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c","Type":"ContainerStarted","Data":"1d79ec018dabd3e0794fb214205acf99216035bf5d1150a6fae60e317614efbd"}
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.091021 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bddvk" event={"ID":"a76ec0f0-9b42-4e48-bd3f-68a74161cc4c","Type":"ContainerStarted","Data":"b7b24262f5ce320968bcd909f4cdf1ba7c970d0332dc5c074304877f088528c4"}
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.091168 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bddvk"
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.091190 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bddvk"
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.092423 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e3fed93-4b85-4e79-a9d3-795dbd7695ba","Type":"ContainerStarted","Data":"409b4d417742834cea01b03c356c3a624e4d682c9cf777189f357f9b65d41660"}
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.093980 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qwhjn" event={"ID":"764be2bf-8d0b-4783-afcd-de76bac7ce34","Type":"ContainerStarted","Data":"ba8d05a72d534989e69d0f5aaac21cffac15e644acddb62b58982c617aba5621"}
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.119293 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bddvk" podStartSLOduration=9.440836116 podStartE2EDuration="17.119272706s" podCreationTimestamp="2026-02-19 10:25:39 +0000 UTC" firstStartedPulling="2026-02-19 10:25:43.019831687 +0000 UTC m=+1031.546296373" lastFinishedPulling="2026-02-19 10:25:50.698268277 +0000 UTC m=+1039.224732963" observedRunningTime="2026-02-19 10:25:56.111612241 +0000 UTC m=+1044.638076927" watchObservedRunningTime="2026-02-19 10:25:56.119272706 +0000 UTC m=+1044.645737392"
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.136034 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.296470016 podStartE2EDuration="18.136011583s" podCreationTimestamp="2026-02-19 10:25:38 +0000 UTC" firstStartedPulling="2026-02-19 10:25:42.955466145 +0000 UTC m=+1031.481930831" lastFinishedPulling="2026-02-19 10:25:54.795007712 +0000 UTC m=+1043.321472398" observedRunningTime="2026-02-19 10:25:56.133491819 +0000 UTC m=+1044.659956525" watchObservedRunningTime="2026-02-19 10:25:56.136011583 +0000 UTC m=+1044.662476289"
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.155033 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qwhjn" podStartSLOduration=2.861706242 podStartE2EDuration="5.155017568s" podCreationTimestamp="2026-02-19 10:25:51 +0000 UTC" firstStartedPulling="2026-02-19 10:25:52.543760269 +0000 UTC m=+1041.070224955" lastFinishedPulling="2026-02-19 10:25:54.837071595 +0000 UTC m=+1043.363536281" observedRunningTime="2026-02-19 10:25:56.152572036 +0000 UTC m=+1044.679036722" watchObservedRunningTime="2026-02-19 10:25:56.155017568 +0000 UTC m=+1044.681482254"
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.787856 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.787977 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.788097 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm"
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.789294 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"861076d931739c868265cf3b64ebb02f51f96fc8b2fc1a6337148664ae662a28"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 10:25:56 crc kubenswrapper[4811]: I0219 10:25:56.789371 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://861076d931739c868265cf3b64ebb02f51f96fc8b2fc1a6337148664ae662a28" gracePeriod=600
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.102342 4811 generic.go:334] "Generic (PLEG): container finished" podID="354e40bc-0a2b-4654-873c-30b6293d6192" containerID="37ce36ed8230c8686e8c3a4a494ee24bbb43e389fe5c8b3449a1602efd81ab53" exitCode=0
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.102435 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"354e40bc-0a2b-4654-873c-30b6293d6192","Type":"ContainerDied","Data":"37ce36ed8230c8686e8c3a4a494ee24bbb43e389fe5c8b3449a1602efd81ab53"}
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.104882 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="861076d931739c868265cf3b64ebb02f51f96fc8b2fc1a6337148664ae662a28" exitCode=0
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.104946 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"861076d931739c868265cf3b64ebb02f51f96fc8b2fc1a6337148664ae662a28"}
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.104977 4811 scope.go:117] "RemoveContainer" containerID="193ac12085ccacd5c18931cdeb481fcee9513ec65e0023789373cfc6e16223d8"
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.106987 4811 generic.go:334] "Generic (PLEG): container finished" podID="ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6" containerID="c50d6fce696dc9d97341aca26331296ca144977e1c5741a9ec05a490a514e8b6" exitCode=0
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.107044 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6","Type":"ContainerDied","Data":"c50d6fce696dc9d97341aca26331296ca144977e1c5741a9ec05a490a514e8b6"}
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.178625 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.179016 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.223864 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.836471 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 19 10:25:57 crc kubenswrapper[4811]: I0219 10:25:57.880906 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.118971 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"354e40bc-0a2b-4654-873c-30b6293d6192","Type":"ContainerStarted","Data":"fe4780e8e99b9210aa8eeaa42b4ffa50132ceccd5f54f4b8b2a908196cd4271c"}
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.122180 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"55d19b42477425a3b99b473164b0990b42ac48a5245631a7eb91475e7635a649"}
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.124418 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6","Type":"ContainerStarted","Data":"66aff534ac72366c3cb26321711d521b3116c2b65bbc5ca4fc4d1ba62b432128"}
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.125086 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.144166 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.224940272 podStartE2EDuration="27.144140934s" podCreationTimestamp="2026-02-19 10:25:31 +0000 UTC" firstStartedPulling="2026-02-19 10:25:42.939585299 +0000 UTC m=+1031.466049985" lastFinishedPulling="2026-02-19 10:25:50.858785941 +0000 UTC m=+1039.385250647" observedRunningTime="2026-02-19 10:25:58.140882141 +0000 UTC m=+1046.667346827" watchObservedRunningTime="2026-02-19 10:25:58.144140934 +0000 UTC m=+1046.670605620"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.168405 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=19.928672253 podStartE2EDuration="28.168386652s" podCreationTimestamp="2026-02-19 10:25:30 +0000 UTC" firstStartedPulling="2026-02-19 10:25:42.619307789 +0000 UTC m=+1031.145772475" lastFinishedPulling="2026-02-19 10:25:50.859022188 +0000 UTC m=+1039.385486874" observedRunningTime="2026-02-19 10:25:58.166754321 +0000 UTC m=+1046.693219057" watchObservedRunningTime="2026-02-19 10:25:58.168386652 +0000 UTC m=+1046.694851338"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.174539 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.177806 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.400748 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.467198 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 19 10:25:58 crc kubenswrapper[4811]: E0219 10:25:58.467761 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56394562-c333-4096-b46f-9f8b0313c765" containerName="dnsmasq-dns"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.467787 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="56394562-c333-4096-b46f-9f8b0313c765" containerName="dnsmasq-dns"
Feb 19 10:25:58 crc kubenswrapper[4811]: E0219 10:25:58.467824 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d7a7f6-ff94-4361-b748-52474061017e" containerName="init"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.467831 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d7a7f6-ff94-4361-b748-52474061017e" containerName="init"
Feb 19 10:25:58 crc kubenswrapper[4811]: E0219 10:25:58.467853 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d7a7f6-ff94-4361-b748-52474061017e" containerName="dnsmasq-dns"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.467860 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d7a7f6-ff94-4361-b748-52474061017e" containerName="dnsmasq-dns"
Feb 19 10:25:58 crc kubenswrapper[4811]: E0219 10:25:58.467876 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56394562-c333-4096-b46f-9f8b0313c765" containerName="init"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.467882 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="56394562-c333-4096-b46f-9f8b0313c765" containerName="init"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.468044 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="96d7a7f6-ff94-4361-b748-52474061017e" containerName="dnsmasq-dns"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.468064 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="56394562-c333-4096-b46f-9f8b0313c765" containerName="dnsmasq-dns"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.468942 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.470399 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.470842 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.471817 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.475106 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-twhz4"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.492674 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.624310 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195bcb34-7bf7-45d0-92d4-0fb83788222c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.624368 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195bcb34-7bf7-45d0-92d4-0fb83788222c-config\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.624417 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/195bcb34-7bf7-45d0-92d4-0fb83788222c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.624652 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/195bcb34-7bf7-45d0-92d4-0fb83788222c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.624895 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/195bcb34-7bf7-45d0-92d4-0fb83788222c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.624929 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195bcb34-7bf7-45d0-92d4-0fb83788222c-scripts\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.624946 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqcdf\" (UniqueName: \"kubernetes.io/projected/195bcb34-7bf7-45d0-92d4-0fb83788222c-kube-api-access-rqcdf\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.726606 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/195bcb34-7bf7-45d0-92d4-0fb83788222c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.727021 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/195bcb34-7bf7-45d0-92d4-0fb83788222c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.727116 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195bcb34-7bf7-45d0-92d4-0fb83788222c-scripts\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.727184 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqcdf\" (UniqueName: \"kubernetes.io/projected/195bcb34-7bf7-45d0-92d4-0fb83788222c-kube-api-access-rqcdf\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.727291 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195bcb34-7bf7-45d0-92d4-0fb83788222c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.727374 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195bcb34-7bf7-45d0-92d4-0fb83788222c-config\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0"
Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.727463 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/195bcb34-7bf7-45d0-92d4-0fb83788222c-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0" Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.727913 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/195bcb34-7bf7-45d0-92d4-0fb83788222c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0" Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.728413 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195bcb34-7bf7-45d0-92d4-0fb83788222c-scripts\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0" Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.728416 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195bcb34-7bf7-45d0-92d4-0fb83788222c-config\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0" Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.734291 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/195bcb34-7bf7-45d0-92d4-0fb83788222c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0" Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.735380 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195bcb34-7bf7-45d0-92d4-0fb83788222c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0" Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.736545 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/195bcb34-7bf7-45d0-92d4-0fb83788222c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0" Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.747340 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqcdf\" (UniqueName: \"kubernetes.io/projected/195bcb34-7bf7-45d0-92d4-0fb83788222c-kube-api-access-rqcdf\") pod \"ovn-northd-0\" (UID: \"195bcb34-7bf7-45d0-92d4-0fb83788222c\") " pod="openstack/ovn-northd-0" Feb 19 10:25:58 crc kubenswrapper[4811]: I0219 10:25:58.794610 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 10:25:59 crc kubenswrapper[4811]: I0219 10:25:59.261126 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 10:25:59 crc kubenswrapper[4811]: W0219 10:25:59.265594 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod195bcb34_7bf7_45d0_92d4_0fb83788222c.slice/crio-d413a4885f974fa89ac326a6016d6d9e7794ef095399d072187b22ce2e9eeb3b WatchSource:0}: Error finding container d413a4885f974fa89ac326a6016d6d9e7794ef095399d072187b22ce2e9eeb3b: Status 404 returned error can't find the container with id d413a4885f974fa89ac326a6016d6d9e7794ef095399d072187b22ce2e9eeb3b Feb 19 10:26:00 crc kubenswrapper[4811]: I0219 10:26:00.141873 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"195bcb34-7bf7-45d0-92d4-0fb83788222c","Type":"ContainerStarted","Data":"d413a4885f974fa89ac326a6016d6d9e7794ef095399d072187b22ce2e9eeb3b"} Feb 19 10:26:01 crc kubenswrapper[4811]: I0219 10:26:01.151221 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"195bcb34-7bf7-45d0-92d4-0fb83788222c","Type":"ContainerStarted","Data":"89fa00f61e70b144073de277778c541132c285f57d9cfe41cc96cde6ee9b0263"} Feb 19 10:26:01 crc kubenswrapper[4811]: I0219 10:26:01.151617 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"195bcb34-7bf7-45d0-92d4-0fb83788222c","Type":"ContainerStarted","Data":"c497c3fbfc08ddb4959e08ab403df6bee16186b46c21287ee9b3175996aca035"} Feb 19 10:26:01 crc kubenswrapper[4811]: I0219 10:26:01.152607 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 10:26:01 crc kubenswrapper[4811]: I0219 10:26:01.179727 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.00235706 podStartE2EDuration="3.179709037s" podCreationTimestamp="2026-02-19 10:25:58 +0000 UTC" firstStartedPulling="2026-02-19 10:25:59.268017476 +0000 UTC m=+1047.794482162" lastFinishedPulling="2026-02-19 10:26:00.445369443 +0000 UTC m=+1048.971834139" observedRunningTime="2026-02-19 10:26:01.170553064 +0000 UTC m=+1049.697017770" watchObservedRunningTime="2026-02-19 10:26:01.179709037 +0000 UTC m=+1049.706173723" Feb 19 10:26:01 crc kubenswrapper[4811]: I0219 10:26:01.820610 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:26:01 crc kubenswrapper[4811]: I0219 10:26:01.866826 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 10:26:01 crc kubenswrapper[4811]: I0219 10:26:01.867112 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 10:26:01 crc kubenswrapper[4811]: I0219 10:26:01.964653 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:26:01 crc kubenswrapper[4811]: I0219 10:26:01.971073 4811 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.038024 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s74fv"] Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.159137 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" podUID="668b2133-abf2-41a1-a9ee-3edd030b23ac" containerName="dnsmasq-dns" containerID="cri-o://7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4" gracePeriod=10 Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.258101 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.688000 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.809882 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-config\") pod \"668b2133-abf2-41a1-a9ee-3edd030b23ac\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.810712 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-dns-svc\") pod \"668b2133-abf2-41a1-a9ee-3edd030b23ac\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.810950 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-ovsdbserver-nb\") pod \"668b2133-abf2-41a1-a9ee-3edd030b23ac\" (UID: 
\"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.810990 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcwnd\" (UniqueName: \"kubernetes.io/projected/668b2133-abf2-41a1-a9ee-3edd030b23ac-kube-api-access-mcwnd\") pod \"668b2133-abf2-41a1-a9ee-3edd030b23ac\" (UID: \"668b2133-abf2-41a1-a9ee-3edd030b23ac\") " Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.818824 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668b2133-abf2-41a1-a9ee-3edd030b23ac-kube-api-access-mcwnd" (OuterVolumeSpecName: "kube-api-access-mcwnd") pod "668b2133-abf2-41a1-a9ee-3edd030b23ac" (UID: "668b2133-abf2-41a1-a9ee-3edd030b23ac"). InnerVolumeSpecName "kube-api-access-mcwnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.856336 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-config" (OuterVolumeSpecName: "config") pod "668b2133-abf2-41a1-a9ee-3edd030b23ac" (UID: "668b2133-abf2-41a1-a9ee-3edd030b23ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.856542 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "668b2133-abf2-41a1-a9ee-3edd030b23ac" (UID: "668b2133-abf2-41a1-a9ee-3edd030b23ac"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.857273 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "668b2133-abf2-41a1-a9ee-3edd030b23ac" (UID: "668b2133-abf2-41a1-a9ee-3edd030b23ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.913053 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.913103 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcwnd\" (UniqueName: \"kubernetes.io/projected/668b2133-abf2-41a1-a9ee-3edd030b23ac-kube-api-access-mcwnd\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.913134 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:02 crc kubenswrapper[4811]: I0219 10:26:02.913143 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/668b2133-abf2-41a1-a9ee-3edd030b23ac-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.046417 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.046558 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.169151 4811 generic.go:334] "Generic (PLEG): container 
finished" podID="668b2133-abf2-41a1-a9ee-3edd030b23ac" containerID="7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4" exitCode=0 Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.170007 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.172657 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" event={"ID":"668b2133-abf2-41a1-a9ee-3edd030b23ac","Type":"ContainerDied","Data":"7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4"} Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.172740 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-s74fv" event={"ID":"668b2133-abf2-41a1-a9ee-3edd030b23ac","Type":"ContainerDied","Data":"0e3b9595b8272af16f3f9bbf85fd649f46ddd7a9ccb2041f3065f8f9a6db3b43"} Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.172776 4811 scope.go:117] "RemoveContainer" containerID="7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4" Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.208603 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s74fv"] Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.211405 4811 scope.go:117] "RemoveContainer" containerID="5c20456bd0c71501dc352c84bb24e5d28078d4c1826a439039c1221124d98725" Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.212834 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-s74fv"] Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.244164 4811 scope.go:117] "RemoveContainer" containerID="7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4" Feb 19 10:26:03 crc kubenswrapper[4811]: E0219 10:26:03.247626 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4\": container with ID starting with 7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4 not found: ID does not exist" containerID="7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4" Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.247714 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4"} err="failed to get container status \"7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4\": rpc error: code = NotFound desc = could not find container \"7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4\": container with ID starting with 7ba74f76aaf2309d3f2e317bdf1f3b2f2247a8f4c4cf19afbd6a80de4b1615b4 not found: ID does not exist" Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.247760 4811 scope.go:117] "RemoveContainer" containerID="5c20456bd0c71501dc352c84bb24e5d28078d4c1826a439039c1221124d98725" Feb 19 10:26:03 crc kubenswrapper[4811]: E0219 10:26:03.251163 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c20456bd0c71501dc352c84bb24e5d28078d4c1826a439039c1221124d98725\": container with ID starting with 5c20456bd0c71501dc352c84bb24e5d28078d4c1826a439039c1221124d98725 not found: ID does not exist" containerID="5c20456bd0c71501dc352c84bb24e5d28078d4c1826a439039c1221124d98725" Feb 19 10:26:03 crc kubenswrapper[4811]: I0219 10:26:03.255110 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c20456bd0c71501dc352c84bb24e5d28078d4c1826a439039c1221124d98725"} err="failed to get container status \"5c20456bd0c71501dc352c84bb24e5d28078d4c1826a439039c1221124d98725\": rpc error: code = NotFound desc = could not find container \"5c20456bd0c71501dc352c84bb24e5d28078d4c1826a439039c1221124d98725\": 
container with ID starting with 5c20456bd0c71501dc352c84bb24e5d28078d4c1826a439039c1221124d98725 not found: ID does not exist" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.499929 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-hwhjb"] Feb 19 10:26:04 crc kubenswrapper[4811]: E0219 10:26:04.500755 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668b2133-abf2-41a1-a9ee-3edd030b23ac" containerName="init" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.500771 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="668b2133-abf2-41a1-a9ee-3edd030b23ac" containerName="init" Feb 19 10:26:04 crc kubenswrapper[4811]: E0219 10:26:04.500784 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668b2133-abf2-41a1-a9ee-3edd030b23ac" containerName="dnsmasq-dns" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.500791 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="668b2133-abf2-41a1-a9ee-3edd030b23ac" containerName="dnsmasq-dns" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.500974 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="668b2133-abf2-41a1-a9ee-3edd030b23ac" containerName="dnsmasq-dns" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.501665 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hwhjb" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.508028 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e501-account-create-update-trk4v"] Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.508988 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e501-account-create-update-trk4v" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.521794 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e501-account-create-update-trk4v"] Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.521852 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hwhjb"] Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.547179 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.606256 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668b2133-abf2-41a1-a9ee-3edd030b23ac" path="/var/lib/kubelet/pods/668b2133-abf2-41a1-a9ee-3edd030b23ac/volumes" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.610204 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-dsmbw"] Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.611533 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dsmbw" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.617154 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dsmbw"] Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.649840 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8853f172-601e-4f67-96d3-5b910eba6c9a-operator-scripts\") pod \"keystone-db-create-hwhjb\" (UID: \"8853f172-601e-4f67-96d3-5b910eba6c9a\") " pod="openstack/keystone-db-create-hwhjb" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.649911 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vssh\" (UniqueName: \"kubernetes.io/projected/8853f172-601e-4f67-96d3-5b910eba6c9a-kube-api-access-8vssh\") pod \"keystone-db-create-hwhjb\" (UID: \"8853f172-601e-4f67-96d3-5b910eba6c9a\") " pod="openstack/keystone-db-create-hwhjb" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.650431 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b6vg\" (UniqueName: \"kubernetes.io/projected/ac222fb2-5295-4eb7-8611-f2a7157f6176-kube-api-access-8b6vg\") pod \"keystone-e501-account-create-update-trk4v\" (UID: \"ac222fb2-5295-4eb7-8611-f2a7157f6176\") " pod="openstack/keystone-e501-account-create-update-trk4v" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.651292 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac222fb2-5295-4eb7-8611-f2a7157f6176-operator-scripts\") pod \"keystone-e501-account-create-update-trk4v\" (UID: \"ac222fb2-5295-4eb7-8611-f2a7157f6176\") " pod="openstack/keystone-e501-account-create-update-trk4v" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.700396 4811 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-407c-account-create-update-pn24j"] Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.701942 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-407c-account-create-update-pn24j" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.708272 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.713045 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-407c-account-create-update-pn24j"] Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.752659 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b6vg\" (UniqueName: \"kubernetes.io/projected/ac222fb2-5295-4eb7-8611-f2a7157f6176-kube-api-access-8b6vg\") pod \"keystone-e501-account-create-update-trk4v\" (UID: \"ac222fb2-5295-4eb7-8611-f2a7157f6176\") " pod="openstack/keystone-e501-account-create-update-trk4v" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.752731 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a34c54-3b76-42e5-b4a3-37a238ad1947-operator-scripts\") pod \"placement-db-create-dsmbw\" (UID: \"d2a34c54-3b76-42e5-b4a3-37a238ad1947\") " pod="openstack/placement-db-create-dsmbw" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.752788 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac222fb2-5295-4eb7-8611-f2a7157f6176-operator-scripts\") pod \"keystone-e501-account-create-update-trk4v\" (UID: \"ac222fb2-5295-4eb7-8611-f2a7157f6176\") " pod="openstack/keystone-e501-account-create-update-trk4v" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.752875 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8853f172-601e-4f67-96d3-5b910eba6c9a-operator-scripts\") pod \"keystone-db-create-hwhjb\" (UID: \"8853f172-601e-4f67-96d3-5b910eba6c9a\") " pod="openstack/keystone-db-create-hwhjb" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.752930 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqzk6\" (UniqueName: \"kubernetes.io/projected/d2a34c54-3b76-42e5-b4a3-37a238ad1947-kube-api-access-rqzk6\") pod \"placement-db-create-dsmbw\" (UID: \"d2a34c54-3b76-42e5-b4a3-37a238ad1947\") " pod="openstack/placement-db-create-dsmbw" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.752967 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vssh\" (UniqueName: \"kubernetes.io/projected/8853f172-601e-4f67-96d3-5b910eba6c9a-kube-api-access-8vssh\") pod \"keystone-db-create-hwhjb\" (UID: \"8853f172-601e-4f67-96d3-5b910eba6c9a\") " pod="openstack/keystone-db-create-hwhjb" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.754107 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac222fb2-5295-4eb7-8611-f2a7157f6176-operator-scripts\") pod \"keystone-e501-account-create-update-trk4v\" (UID: \"ac222fb2-5295-4eb7-8611-f2a7157f6176\") " pod="openstack/keystone-e501-account-create-update-trk4v" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.754121 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8853f172-601e-4f67-96d3-5b910eba6c9a-operator-scripts\") pod \"keystone-db-create-hwhjb\" (UID: \"8853f172-601e-4f67-96d3-5b910eba6c9a\") " pod="openstack/keystone-db-create-hwhjb" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.771376 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8vssh\" (UniqueName: \"kubernetes.io/projected/8853f172-601e-4f67-96d3-5b910eba6c9a-kube-api-access-8vssh\") pod \"keystone-db-create-hwhjb\" (UID: \"8853f172-601e-4f67-96d3-5b910eba6c9a\") " pod="openstack/keystone-db-create-hwhjb" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.772291 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b6vg\" (UniqueName: \"kubernetes.io/projected/ac222fb2-5295-4eb7-8611-f2a7157f6176-kube-api-access-8b6vg\") pod \"keystone-e501-account-create-update-trk4v\" (UID: \"ac222fb2-5295-4eb7-8611-f2a7157f6176\") " pod="openstack/keystone-e501-account-create-update-trk4v" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.854649 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a34c54-3b76-42e5-b4a3-37a238ad1947-operator-scripts\") pod \"placement-db-create-dsmbw\" (UID: \"d2a34c54-3b76-42e5-b4a3-37a238ad1947\") " pod="openstack/placement-db-create-dsmbw" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.854743 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-operator-scripts\") pod \"placement-407c-account-create-update-pn24j\" (UID: \"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea\") " pod="openstack/placement-407c-account-create-update-pn24j" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.854797 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqzk6\" (UniqueName: \"kubernetes.io/projected/d2a34c54-3b76-42e5-b4a3-37a238ad1947-kube-api-access-rqzk6\") pod \"placement-db-create-dsmbw\" (UID: \"d2a34c54-3b76-42e5-b4a3-37a238ad1947\") " pod="openstack/placement-db-create-dsmbw" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.854823 
4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kshh\" (UniqueName: \"kubernetes.io/projected/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-kube-api-access-9kshh\") pod \"placement-407c-account-create-update-pn24j\" (UID: \"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea\") " pod="openstack/placement-407c-account-create-update-pn24j" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.855808 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a34c54-3b76-42e5-b4a3-37a238ad1947-operator-scripts\") pod \"placement-db-create-dsmbw\" (UID: \"d2a34c54-3b76-42e5-b4a3-37a238ad1947\") " pod="openstack/placement-db-create-dsmbw" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.871604 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqzk6\" (UniqueName: \"kubernetes.io/projected/d2a34c54-3b76-42e5-b4a3-37a238ad1947-kube-api-access-rqzk6\") pod \"placement-db-create-dsmbw\" (UID: \"d2a34c54-3b76-42e5-b4a3-37a238ad1947\") " pod="openstack/placement-db-create-dsmbw" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.878969 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hwhjb" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.902496 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e501-account-create-update-trk4v" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.957143 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-operator-scripts\") pod \"placement-407c-account-create-update-pn24j\" (UID: \"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea\") " pod="openstack/placement-407c-account-create-update-pn24j" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.957519 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kshh\" (UniqueName: \"kubernetes.io/projected/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-kube-api-access-9kshh\") pod \"placement-407c-account-create-update-pn24j\" (UID: \"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea\") " pod="openstack/placement-407c-account-create-update-pn24j" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.958084 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dsmbw" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.958677 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-operator-scripts\") pod \"placement-407c-account-create-update-pn24j\" (UID: \"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea\") " pod="openstack/placement-407c-account-create-update-pn24j" Feb 19 10:26:04 crc kubenswrapper[4811]: I0219 10:26:04.979888 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kshh\" (UniqueName: \"kubernetes.io/projected/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-kube-api-access-9kshh\") pod \"placement-407c-account-create-update-pn24j\" (UID: \"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea\") " pod="openstack/placement-407c-account-create-update-pn24j" Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.023542 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-407c-account-create-update-pn24j" Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.389030 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.392097 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hwhjb"] Feb 19 10:26:05 crc kubenswrapper[4811]: W0219 10:26:05.403175 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8853f172_601e_4f67_96d3_5b910eba6c9a.slice/crio-cc1aacb7a9c58f241d82b4179b63cf99f1cb7641dcc507dfd72ec8d6ba661248 WatchSource:0}: Error finding container cc1aacb7a9c58f241d82b4179b63cf99f1cb7641dcc507dfd72ec8d6ba661248: Status 404 returned error can't find the container with id cc1aacb7a9c58f241d82b4179b63cf99f1cb7641dcc507dfd72ec8d6ba661248 Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.480597 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dsmbw"] Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.489328 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e501-account-create-update-trk4v"] Feb 19 10:26:05 crc kubenswrapper[4811]: W0219 10:26:05.493318 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2a34c54_3b76_42e5_b4a3_37a238ad1947.slice/crio-ee24ad4fafd7a076bfad40fc0ab6dad7421549fd27cd8c3f49a2bd087c15f2c6 WatchSource:0}: Error finding container ee24ad4fafd7a076bfad40fc0ab6dad7421549fd27cd8c3f49a2bd087c15f2c6: Status 404 returned error can't find the container with id ee24ad4fafd7a076bfad40fc0ab6dad7421549fd27cd8c3f49a2bd087c15f2c6 Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.496566 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-cell1-galera-0" Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.640612 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-407c-account-create-update-pn24j"] Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.801864 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.807839 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-7qmw4"] Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.809479 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.830664 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7qmw4"] Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.985039 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.985722 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6hxn\" (UniqueName: \"kubernetes.io/projected/5f02bee8-3f33-4fff-9d32-5bbd8463e842-kube-api-access-v6hxn\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.985799 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-config\") pod 
\"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.985848 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-dns-svc\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:05 crc kubenswrapper[4811]: I0219 10:26:05.985897 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.087323 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.087392 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6hxn\" (UniqueName: \"kubernetes.io/projected/5f02bee8-3f33-4fff-9d32-5bbd8463e842-kube-api-access-v6hxn\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.087426 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-config\") pod \"dnsmasq-dns-698758b865-7qmw4\" 
(UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.087495 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-dns-svc\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.087524 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.088539 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.088532 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.088620 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-config\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 
10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.088720 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-dns-svc\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.107978 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6hxn\" (UniqueName: \"kubernetes.io/projected/5f02bee8-3f33-4fff-9d32-5bbd8463e842-kube-api-access-v6hxn\") pod \"dnsmasq-dns-698758b865-7qmw4\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.155982 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.214891 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dsmbw" event={"ID":"d2a34c54-3b76-42e5-b4a3-37a238ad1947","Type":"ContainerStarted","Data":"ee24ad4fafd7a076bfad40fc0ab6dad7421549fd27cd8c3f49a2bd087c15f2c6"} Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.216775 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hwhjb" event={"ID":"8853f172-601e-4f67-96d3-5b910eba6c9a","Type":"ContainerStarted","Data":"cc1aacb7a9c58f241d82b4179b63cf99f1cb7641dcc507dfd72ec8d6ba661248"} Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.218731 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e501-account-create-update-trk4v" event={"ID":"ac222fb2-5295-4eb7-8611-f2a7157f6176","Type":"ContainerStarted","Data":"7bca7c02714ef0973ba0e26bfbce2aa5a45b40b0a1eabe5937d932152deb09e5"} Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.221336 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/placement-407c-account-create-update-pn24j" event={"ID":"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea","Type":"ContainerStarted","Data":"66b07f919175576d7674d4f3ec73a5e49afccbac3b5789d2ea625a7394ec9b24"} Feb 19 10:26:06 crc kubenswrapper[4811]: I0219 10:26:06.683397 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7qmw4"] Feb 19 10:26:06 crc kubenswrapper[4811]: W0219 10:26:06.689517 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f02bee8_3f33_4fff_9d32_5bbd8463e842.slice/crio-a750b837ff14695aed7b8a7e95e9d2c1f35c41be99cd0a3c4c4f4dd1a9727cc7 WatchSource:0}: Error finding container a750b837ff14695aed7b8a7e95e9d2c1f35c41be99cd0a3c4c4f4dd1a9727cc7: Status 404 returned error can't find the container with id a750b837ff14695aed7b8a7e95e9d2c1f35c41be99cd0a3c4c4f4dd1a9727cc7 Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.019970 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.025762 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.028247 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.028258 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.028289 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.028323 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-xzzb2" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.049826 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.208588 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dce922ee-7833-4b43-9358-72aa7da07a03-cache\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.208642 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.208673 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbvs7\" (UniqueName: \"kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-kube-api-access-cbvs7\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " 
pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.208783 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.208825 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dce922ee-7833-4b43-9358-72aa7da07a03-lock\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.208910 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce922ee-7833-4b43-9358-72aa7da07a03-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.227874 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7qmw4" event={"ID":"5f02bee8-3f33-4fff-9d32-5bbd8463e842","Type":"ContainerStarted","Data":"a750b837ff14695aed7b8a7e95e9d2c1f35c41be99cd0a3c4c4f4dd1a9727cc7"} Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.310259 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbvs7\" (UniqueName: \"kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-kube-api-access-cbvs7\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.310360 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.310401 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dce922ee-7833-4b43-9358-72aa7da07a03-lock\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.310494 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce922ee-7833-4b43-9358-72aa7da07a03-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.310526 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dce922ee-7833-4b43-9358-72aa7da07a03-cache\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.310545 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: E0219 10:26:07.310684 4811 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:26:07 crc kubenswrapper[4811]: E0219 10:26:07.310697 4811 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:26:07 crc 
kubenswrapper[4811]: E0219 10:26:07.310743 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift podName:dce922ee-7833-4b43-9358-72aa7da07a03 nodeName:}" failed. No retries permitted until 2026-02-19 10:26:07.81072589 +0000 UTC m=+1056.337190576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift") pod "swift-storage-0" (UID: "dce922ee-7833-4b43-9358-72aa7da07a03") : configmap "swift-ring-files" not found Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.311100 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dce922ee-7833-4b43-9358-72aa7da07a03-cache\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.312721 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dce922ee-7833-4b43-9358-72aa7da07a03-lock\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.313012 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.322089 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce922ee-7833-4b43-9358-72aa7da07a03-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " 
pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.328325 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbvs7\" (UniqueName: \"kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-kube-api-access-cbvs7\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.341544 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: I0219 10:26:07.817867 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:07 crc kubenswrapper[4811]: E0219 10:26:07.818076 4811 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:26:07 crc kubenswrapper[4811]: E0219 10:26:07.818104 4811 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:26:07 crc kubenswrapper[4811]: E0219 10:26:07.818162 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift podName:dce922ee-7833-4b43-9358-72aa7da07a03 nodeName:}" failed. No retries permitted until 2026-02-19 10:26:08.818143865 +0000 UTC m=+1057.344608551 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift") pod "swift-storage-0" (UID: "dce922ee-7833-4b43-9358-72aa7da07a03") : configmap "swift-ring-files" not found Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.603369 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gfqfc"] Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.605245 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gfqfc" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.615936 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gfqfc"] Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.699623 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bdcd-account-create-update-x87fb"] Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.700661 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bdcd-account-create-update-x87fb" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.703041 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.710414 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bdcd-account-create-update-x87fb"] Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.735008 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6dc01af-9510-4de2-a0ae-a841cc7bf815-operator-scripts\") pod \"glance-db-create-gfqfc\" (UID: \"f6dc01af-9510-4de2-a0ae-a841cc7bf815\") " pod="openstack/glance-db-create-gfqfc" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.735090 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dspmm\" (UniqueName: \"kubernetes.io/projected/f6dc01af-9510-4de2-a0ae-a841cc7bf815-kube-api-access-dspmm\") pod \"glance-db-create-gfqfc\" (UID: \"f6dc01af-9510-4de2-a0ae-a841cc7bf815\") " pod="openstack/glance-db-create-gfqfc" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.836567 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.836617 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6dc01af-9510-4de2-a0ae-a841cc7bf815-operator-scripts\") pod \"glance-db-create-gfqfc\" (UID: \"f6dc01af-9510-4de2-a0ae-a841cc7bf815\") " pod="openstack/glance-db-create-gfqfc" Feb 19 10:26:08 crc 
kubenswrapper[4811]: I0219 10:26:08.836654 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dspmm\" (UniqueName: \"kubernetes.io/projected/f6dc01af-9510-4de2-a0ae-a841cc7bf815-kube-api-access-dspmm\") pod \"glance-db-create-gfqfc\" (UID: \"f6dc01af-9510-4de2-a0ae-a841cc7bf815\") " pod="openstack/glance-db-create-gfqfc" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.836692 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csbsz\" (UniqueName: \"kubernetes.io/projected/cd758757-2f52-4fe1-b24f-4ca697bf6498-kube-api-access-csbsz\") pod \"glance-bdcd-account-create-update-x87fb\" (UID: \"cd758757-2f52-4fe1-b24f-4ca697bf6498\") " pod="openstack/glance-bdcd-account-create-update-x87fb" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.836769 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd758757-2f52-4fe1-b24f-4ca697bf6498-operator-scripts\") pod \"glance-bdcd-account-create-update-x87fb\" (UID: \"cd758757-2f52-4fe1-b24f-4ca697bf6498\") " pod="openstack/glance-bdcd-account-create-update-x87fb" Feb 19 10:26:08 crc kubenswrapper[4811]: E0219 10:26:08.836833 4811 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:26:08 crc kubenswrapper[4811]: E0219 10:26:08.836868 4811 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:26:08 crc kubenswrapper[4811]: E0219 10:26:08.836987 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift podName:dce922ee-7833-4b43-9358-72aa7da07a03 nodeName:}" failed. 
No retries permitted until 2026-02-19 10:26:10.836958876 +0000 UTC m=+1059.363423562 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift") pod "swift-storage-0" (UID: "dce922ee-7833-4b43-9358-72aa7da07a03") : configmap "swift-ring-files" not found Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.837930 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6dc01af-9510-4de2-a0ae-a841cc7bf815-operator-scripts\") pod \"glance-db-create-gfqfc\" (UID: \"f6dc01af-9510-4de2-a0ae-a841cc7bf815\") " pod="openstack/glance-db-create-gfqfc" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.858112 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dspmm\" (UniqueName: \"kubernetes.io/projected/f6dc01af-9510-4de2-a0ae-a841cc7bf815-kube-api-access-dspmm\") pod \"glance-db-create-gfqfc\" (UID: \"f6dc01af-9510-4de2-a0ae-a841cc7bf815\") " pod="openstack/glance-db-create-gfqfc" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.938278 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gfqfc" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.938947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csbsz\" (UniqueName: \"kubernetes.io/projected/cd758757-2f52-4fe1-b24f-4ca697bf6498-kube-api-access-csbsz\") pod \"glance-bdcd-account-create-update-x87fb\" (UID: \"cd758757-2f52-4fe1-b24f-4ca697bf6498\") " pod="openstack/glance-bdcd-account-create-update-x87fb" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.939143 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd758757-2f52-4fe1-b24f-4ca697bf6498-operator-scripts\") pod \"glance-bdcd-account-create-update-x87fb\" (UID: \"cd758757-2f52-4fe1-b24f-4ca697bf6498\") " pod="openstack/glance-bdcd-account-create-update-x87fb" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.939925 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd758757-2f52-4fe1-b24f-4ca697bf6498-operator-scripts\") pod \"glance-bdcd-account-create-update-x87fb\" (UID: \"cd758757-2f52-4fe1-b24f-4ca697bf6498\") " pod="openstack/glance-bdcd-account-create-update-x87fb" Feb 19 10:26:08 crc kubenswrapper[4811]: I0219 10:26:08.957969 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csbsz\" (UniqueName: \"kubernetes.io/projected/cd758757-2f52-4fe1-b24f-4ca697bf6498-kube-api-access-csbsz\") pod \"glance-bdcd-account-create-update-x87fb\" (UID: \"cd758757-2f52-4fe1-b24f-4ca697bf6498\") " pod="openstack/glance-bdcd-account-create-update-x87fb" Feb 19 10:26:09 crc kubenswrapper[4811]: I0219 10:26:09.016694 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bdcd-account-create-update-x87fb" Feb 19 10:26:09 crc kubenswrapper[4811]: I0219 10:26:09.403714 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gfqfc"] Feb 19 10:26:09 crc kubenswrapper[4811]: I0219 10:26:09.513557 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bdcd-account-create-update-x87fb"] Feb 19 10:26:09 crc kubenswrapper[4811]: W0219 10:26:09.523607 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd758757_2f52_4fe1_b24f_4ca697bf6498.slice/crio-74326ca1924d8adb0ef2e8a8b0d91e011684b0f7fec4ea176eada8e821830234 WatchSource:0}: Error finding container 74326ca1924d8adb0ef2e8a8b0d91e011684b0f7fec4ea176eada8e821830234: Status 404 returned error can't find the container with id 74326ca1924d8adb0ef2e8a8b0d91e011684b0f7fec4ea176eada8e821830234 Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.254121 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdcd-account-create-update-x87fb" event={"ID":"cd758757-2f52-4fe1-b24f-4ca697bf6498","Type":"ContainerStarted","Data":"9f967a3ebea9d2591d2dc2599133d5e6aa076323a607f66e1cf33dc6cfe6521a"} Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.254524 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdcd-account-create-update-x87fb" event={"ID":"cd758757-2f52-4fe1-b24f-4ca697bf6498","Type":"ContainerStarted","Data":"74326ca1924d8adb0ef2e8a8b0d91e011684b0f7fec4ea176eada8e821830234"} Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.255422 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dsmbw" event={"ID":"d2a34c54-3b76-42e5-b4a3-37a238ad1947","Type":"ContainerStarted","Data":"9e50fa06f94b3a7d7d8ffa16bbf1991a4a20f0f5ab833c20e5f22b844e6655e3"} Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.257799 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hwhjb" event={"ID":"8853f172-601e-4f67-96d3-5b910eba6c9a","Type":"ContainerStarted","Data":"6501c8e1318cd53ed167bc7b7b443a15d607b118afd33cc80d528689c41db2e8"} Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.260608 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7qmw4" event={"ID":"5f02bee8-3f33-4fff-9d32-5bbd8463e842","Type":"ContainerStarted","Data":"6c4d603237453babd3f8a649fe46dda5e0b5fe7934efda14d04f274fbcbb64b7"} Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.263943 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e501-account-create-update-trk4v" event={"ID":"ac222fb2-5295-4eb7-8611-f2a7157f6176","Type":"ContainerStarted","Data":"dca9145c764fa95317a9a216c692c289f7a09ea6a226c7feee7c413906b75919"} Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.265740 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gfqfc" event={"ID":"f6dc01af-9510-4de2-a0ae-a841cc7bf815","Type":"ContainerStarted","Data":"c17e6fd5eeb6b419c26849f83d1668f3dc39420162b853654baa0b8ee58e683b"} Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.265990 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gfqfc" event={"ID":"f6dc01af-9510-4de2-a0ae-a841cc7bf815","Type":"ContainerStarted","Data":"60eacaf05dcf7c83744f36940956c617654ea044d88bac8b818237e026ae2eb5"} Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.269528 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-407c-account-create-update-pn24j" event={"ID":"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea","Type":"ContainerStarted","Data":"8f59ea20915b24e2a6c5c106c57748d1c9fc01e4ba55ecc3416bb13c7f997880"} Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.275220 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-db-create-dsmbw" podStartSLOduration=6.275197019 podStartE2EDuration="6.275197019s" podCreationTimestamp="2026-02-19 10:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:26:10.268758155 +0000 UTC m=+1058.795222841" watchObservedRunningTime="2026-02-19 10:26:10.275197019 +0000 UTC m=+1058.801661705" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.504550 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-m2sl2"] Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.506170 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m2sl2" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.509045 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.518944 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m2sl2"] Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.680415 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfmh2\" (UniqueName: \"kubernetes.io/projected/4999e042-333a-4d0b-815e-31da16e22f72-kube-api-access-wfmh2\") pod \"root-account-create-update-m2sl2\" (UID: \"4999e042-333a-4d0b-815e-31da16e22f72\") " pod="openstack/root-account-create-update-m2sl2" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.680501 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4999e042-333a-4d0b-815e-31da16e22f72-operator-scripts\") pod \"root-account-create-update-m2sl2\" (UID: \"4999e042-333a-4d0b-815e-31da16e22f72\") " pod="openstack/root-account-create-update-m2sl2" Feb 19 10:26:10 crc 
kubenswrapper[4811]: I0219 10:26:10.782339 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfmh2\" (UniqueName: \"kubernetes.io/projected/4999e042-333a-4d0b-815e-31da16e22f72-kube-api-access-wfmh2\") pod \"root-account-create-update-m2sl2\" (UID: \"4999e042-333a-4d0b-815e-31da16e22f72\") " pod="openstack/root-account-create-update-m2sl2" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.782409 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4999e042-333a-4d0b-815e-31da16e22f72-operator-scripts\") pod \"root-account-create-update-m2sl2\" (UID: \"4999e042-333a-4d0b-815e-31da16e22f72\") " pod="openstack/root-account-create-update-m2sl2" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.784022 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4999e042-333a-4d0b-815e-31da16e22f72-operator-scripts\") pod \"root-account-create-update-m2sl2\" (UID: \"4999e042-333a-4d0b-815e-31da16e22f72\") " pod="openstack/root-account-create-update-m2sl2" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.806590 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfmh2\" (UniqueName: \"kubernetes.io/projected/4999e042-333a-4d0b-815e-31da16e22f72-kube-api-access-wfmh2\") pod \"root-account-create-update-m2sl2\" (UID: \"4999e042-333a-4d0b-815e-31da16e22f72\") " pod="openstack/root-account-create-update-m2sl2" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.822425 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-m2sl2" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.885543 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:10 crc kubenswrapper[4811]: E0219 10:26:10.885746 4811 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:26:10 crc kubenswrapper[4811]: E0219 10:26:10.886077 4811 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:26:10 crc kubenswrapper[4811]: E0219 10:26:10.886140 4811 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift podName:dce922ee-7833-4b43-9358-72aa7da07a03 nodeName:}" failed. No retries permitted until 2026-02-19 10:26:14.886121285 +0000 UTC m=+1063.412585981 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift") pod "swift-storage-0" (UID: "dce922ee-7833-4b43-9358-72aa7da07a03") : configmap "swift-ring-files" not found Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.904268 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-r67q5"] Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.905993 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.910148 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.910606 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.910776 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 10:26:10 crc kubenswrapper[4811]: I0219 10:26:10.912053 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-r67q5"] Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.088142 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-dispersionconf\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.088207 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rljw9\" (UniqueName: \"kubernetes.io/projected/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-kube-api-access-rljw9\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.088276 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-combined-ca-bundle\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc 
kubenswrapper[4811]: I0219 10:26:11.088318 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-scripts\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.088343 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-swiftconf\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.088378 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-ring-data-devices\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.088403 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-etc-swift\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.189957 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-combined-ca-bundle\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: 
I0219 10:26:11.190083 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-scripts\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.190116 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-swiftconf\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.190158 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-ring-data-devices\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.190189 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-etc-swift\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.190215 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-dispersionconf\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.190262 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rljw9\" (UniqueName: \"kubernetes.io/projected/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-kube-api-access-rljw9\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.191746 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-etc-swift\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.191810 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-ring-data-devices\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.192933 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-scripts\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.199055 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-dispersionconf\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.199587 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-swiftconf\") pod 
\"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.200339 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-combined-ca-bundle\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.206235 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rljw9\" (UniqueName: \"kubernetes.io/projected/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-kube-api-access-rljw9\") pod \"swift-ring-rebalance-r67q5\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.284255 4811 generic.go:334] "Generic (PLEG): container finished" podID="2001e70d-4b7a-4c4d-8ed3-54cedaea54ea" containerID="8f59ea20915b24e2a6c5c106c57748d1c9fc01e4ba55ecc3416bb13c7f997880" exitCode=0 Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.285217 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-407c-account-create-update-pn24j" event={"ID":"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea","Type":"ContainerDied","Data":"8f59ea20915b24e2a6c5c106c57748d1c9fc01e4ba55ecc3416bb13c7f997880"} Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.287394 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.290018 4811 generic.go:334] "Generic (PLEG): container finished" podID="cd758757-2f52-4fe1-b24f-4ca697bf6498" containerID="9f967a3ebea9d2591d2dc2599133d5e6aa076323a607f66e1cf33dc6cfe6521a" exitCode=0 Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.290089 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdcd-account-create-update-x87fb" event={"ID":"cd758757-2f52-4fe1-b24f-4ca697bf6498","Type":"ContainerDied","Data":"9f967a3ebea9d2591d2dc2599133d5e6aa076323a607f66e1cf33dc6cfe6521a"} Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.291909 4811 generic.go:334] "Generic (PLEG): container finished" podID="d2a34c54-3b76-42e5-b4a3-37a238ad1947" containerID="9e50fa06f94b3a7d7d8ffa16bbf1991a4a20f0f5ab833c20e5f22b844e6655e3" exitCode=0 Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.291983 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dsmbw" event={"ID":"d2a34c54-3b76-42e5-b4a3-37a238ad1947","Type":"ContainerDied","Data":"9e50fa06f94b3a7d7d8ffa16bbf1991a4a20f0f5ab833c20e5f22b844e6655e3"} Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.293853 4811 generic.go:334] "Generic (PLEG): container finished" podID="8853f172-601e-4f67-96d3-5b910eba6c9a" containerID="6501c8e1318cd53ed167bc7b7b443a15d607b118afd33cc80d528689c41db2e8" exitCode=0 Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.293910 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hwhjb" event={"ID":"8853f172-601e-4f67-96d3-5b910eba6c9a","Type":"ContainerDied","Data":"6501c8e1318cd53ed167bc7b7b443a15d607b118afd33cc80d528689c41db2e8"} Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.301196 4811 generic.go:334] "Generic (PLEG): container finished" podID="5f02bee8-3f33-4fff-9d32-5bbd8463e842" 
containerID="6c4d603237453babd3f8a649fe46dda5e0b5fe7934efda14d04f274fbcbb64b7" exitCode=0 Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.301563 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7qmw4" event={"ID":"5f02bee8-3f33-4fff-9d32-5bbd8463e842","Type":"ContainerDied","Data":"6c4d603237453babd3f8a649fe46dda5e0b5fe7934efda14d04f274fbcbb64b7"} Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.307628 4811 generic.go:334] "Generic (PLEG): container finished" podID="ac222fb2-5295-4eb7-8611-f2a7157f6176" containerID="dca9145c764fa95317a9a216c692c289f7a09ea6a226c7feee7c413906b75919" exitCode=0 Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.307859 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e501-account-create-update-trk4v" event={"ID":"ac222fb2-5295-4eb7-8611-f2a7157f6176","Type":"ContainerDied","Data":"dca9145c764fa95317a9a216c692c289f7a09ea6a226c7feee7c413906b75919"} Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.318624 4811 generic.go:334] "Generic (PLEG): container finished" podID="f6dc01af-9510-4de2-a0ae-a841cc7bf815" containerID="c17e6fd5eeb6b419c26849f83d1668f3dc39420162b853654baa0b8ee58e683b" exitCode=0 Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.318717 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gfqfc" event={"ID":"f6dc01af-9510-4de2-a0ae-a841cc7bf815","Type":"ContainerDied","Data":"c17e6fd5eeb6b419c26849f83d1668f3dc39420162b853654baa0b8ee58e683b"} Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.333236 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m2sl2"] Feb 19 10:26:11 crc kubenswrapper[4811]: W0219 10:26:11.336621 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4999e042_333a_4d0b_815e_31da16e22f72.slice/crio-2c40a6711bed86b0a30db1a170eb17f65ff45abac4cad69b7ca649c1c173846d WatchSource:0}: Error finding container 2c40a6711bed86b0a30db1a170eb17f65ff45abac4cad69b7ca649c1c173846d: Status 404 returned error can't find the container with id 2c40a6711bed86b0a30db1a170eb17f65ff45abac4cad69b7ca649c1c173846d Feb 19 10:26:11 crc kubenswrapper[4811]: I0219 10:26:11.778197 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-r67q5"] Feb 19 10:26:11 crc kubenswrapper[4811]: W0219 10:26:11.842120 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30bb0370_e0d8_4aaf_8a07_35ab4af890c6.slice/crio-c7913432578cd9e1fbc3656ee35245fec52f011b89e1896f569d5e011525bb7a WatchSource:0}: Error finding container c7913432578cd9e1fbc3656ee35245fec52f011b89e1896f569d5e011525bb7a: Status 404 returned error can't find the container with id c7913432578cd9e1fbc3656ee35245fec52f011b89e1896f569d5e011525bb7a Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.333603 4811 generic.go:334] "Generic (PLEG): container finished" podID="4999e042-333a-4d0b-815e-31da16e22f72" containerID="f0ee88a943abf03f9f0bdfcb2e6f6fb44678c9352a36a815d3bd5cbde010af1a" exitCode=0 Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.333672 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m2sl2" event={"ID":"4999e042-333a-4d0b-815e-31da16e22f72","Type":"ContainerDied","Data":"f0ee88a943abf03f9f0bdfcb2e6f6fb44678c9352a36a815d3bd5cbde010af1a"} Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.333695 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m2sl2" 
event={"ID":"4999e042-333a-4d0b-815e-31da16e22f72","Type":"ContainerStarted","Data":"2c40a6711bed86b0a30db1a170eb17f65ff45abac4cad69b7ca649c1c173846d"} Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.335861 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7qmw4" event={"ID":"5f02bee8-3f33-4fff-9d32-5bbd8463e842","Type":"ContainerStarted","Data":"8470527caf2dcba0a0c9b4a578bffec3fbfc9426f77526ec80f2c9c0d268ed4d"} Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.336932 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.338912 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-r67q5" event={"ID":"30bb0370-e0d8-4aaf-8a07-35ab4af890c6","Type":"ContainerStarted","Data":"c7913432578cd9e1fbc3656ee35245fec52f011b89e1896f569d5e011525bb7a"} Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.381425 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-7qmw4" podStartSLOduration=7.381405421 podStartE2EDuration="7.381405421s" podCreationTimestamp="2026-02-19 10:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:26:12.375967713 +0000 UTC m=+1060.902432419" watchObservedRunningTime="2026-02-19 10:26:12.381405421 +0000 UTC m=+1060.907870107" Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.797488 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dsmbw" Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.835983 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a34c54-3b76-42e5-b4a3-37a238ad1947-operator-scripts\") pod \"d2a34c54-3b76-42e5-b4a3-37a238ad1947\" (UID: \"d2a34c54-3b76-42e5-b4a3-37a238ad1947\") " Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.836183 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqzk6\" (UniqueName: \"kubernetes.io/projected/d2a34c54-3b76-42e5-b4a3-37a238ad1947-kube-api-access-rqzk6\") pod \"d2a34c54-3b76-42e5-b4a3-37a238ad1947\" (UID: \"d2a34c54-3b76-42e5-b4a3-37a238ad1947\") " Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.837764 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2a34c54-3b76-42e5-b4a3-37a238ad1947-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2a34c54-3b76-42e5-b4a3-37a238ad1947" (UID: "d2a34c54-3b76-42e5-b4a3-37a238ad1947"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.846756 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a34c54-3b76-42e5-b4a3-37a238ad1947-kube-api-access-rqzk6" (OuterVolumeSpecName: "kube-api-access-rqzk6") pod "d2a34c54-3b76-42e5-b4a3-37a238ad1947" (UID: "d2a34c54-3b76-42e5-b4a3-37a238ad1947"). InnerVolumeSpecName "kube-api-access-rqzk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.937717 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqzk6\" (UniqueName: \"kubernetes.io/projected/d2a34c54-3b76-42e5-b4a3-37a238ad1947-kube-api-access-rqzk6\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:12 crc kubenswrapper[4811]: I0219 10:26:12.937757 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2a34c54-3b76-42e5-b4a3-37a238ad1947-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.141231 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hwhjb" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.154978 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gfqfc" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.193344 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdcd-account-create-update-x87fb" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.202486 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e501-account-create-update-trk4v" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.208918 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-407c-account-create-update-pn24j" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.344891 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac222fb2-5295-4eb7-8611-f2a7157f6176-operator-scripts\") pod \"ac222fb2-5295-4eb7-8611-f2a7157f6176\" (UID: \"ac222fb2-5295-4eb7-8611-f2a7157f6176\") " Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.345586 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b6vg\" (UniqueName: \"kubernetes.io/projected/ac222fb2-5295-4eb7-8611-f2a7157f6176-kube-api-access-8b6vg\") pod \"ac222fb2-5295-4eb7-8611-f2a7157f6176\" (UID: \"ac222fb2-5295-4eb7-8611-f2a7157f6176\") " Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.345666 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd758757-2f52-4fe1-b24f-4ca697bf6498-operator-scripts\") pod \"cd758757-2f52-4fe1-b24f-4ca697bf6498\" (UID: \"cd758757-2f52-4fe1-b24f-4ca697bf6498\") " Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.345756 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6dc01af-9510-4de2-a0ae-a841cc7bf815-operator-scripts\") pod \"f6dc01af-9510-4de2-a0ae-a841cc7bf815\" (UID: \"f6dc01af-9510-4de2-a0ae-a841cc7bf815\") " Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.345823 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dspmm\" (UniqueName: \"kubernetes.io/projected/f6dc01af-9510-4de2-a0ae-a841cc7bf815-kube-api-access-dspmm\") pod \"f6dc01af-9510-4de2-a0ae-a841cc7bf815\" (UID: \"f6dc01af-9510-4de2-a0ae-a841cc7bf815\") " Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.345896 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9kshh\" (UniqueName: \"kubernetes.io/projected/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-kube-api-access-9kshh\") pod \"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea\" (UID: \"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea\") " Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.345934 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8853f172-601e-4f67-96d3-5b910eba6c9a-operator-scripts\") pod \"8853f172-601e-4f67-96d3-5b910eba6c9a\" (UID: \"8853f172-601e-4f67-96d3-5b910eba6c9a\") " Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.345988 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vssh\" (UniqueName: \"kubernetes.io/projected/8853f172-601e-4f67-96d3-5b910eba6c9a-kube-api-access-8vssh\") pod \"8853f172-601e-4f67-96d3-5b910eba6c9a\" (UID: \"8853f172-601e-4f67-96d3-5b910eba6c9a\") " Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.346022 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-operator-scripts\") pod \"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea\" (UID: \"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea\") " Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.346046 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csbsz\" (UniqueName: \"kubernetes.io/projected/cd758757-2f52-4fe1-b24f-4ca697bf6498-kube-api-access-csbsz\") pod \"cd758757-2f52-4fe1-b24f-4ca697bf6498\" (UID: \"cd758757-2f52-4fe1-b24f-4ca697bf6498\") " Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.346034 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac222fb2-5295-4eb7-8611-f2a7157f6176-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"ac222fb2-5295-4eb7-8611-f2a7157f6176" (UID: "ac222fb2-5295-4eb7-8611-f2a7157f6176"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.346848 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac222fb2-5295-4eb7-8611-f2a7157f6176-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.348552 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6dc01af-9510-4de2-a0ae-a841cc7bf815-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6dc01af-9510-4de2-a0ae-a841cc7bf815" (UID: "f6dc01af-9510-4de2-a0ae-a841cc7bf815"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.349400 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8853f172-601e-4f67-96d3-5b910eba6c9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8853f172-601e-4f67-96d3-5b910eba6c9a" (UID: "8853f172-601e-4f67-96d3-5b910eba6c9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.349425 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2001e70d-4b7a-4c4d-8ed3-54cedaea54ea" (UID: "2001e70d-4b7a-4c4d-8ed3-54cedaea54ea"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.349647 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd758757-2f52-4fe1-b24f-4ca697bf6498-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd758757-2f52-4fe1-b24f-4ca697bf6498" (UID: "cd758757-2f52-4fe1-b24f-4ca697bf6498"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.350688 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd758757-2f52-4fe1-b24f-4ca697bf6498-kube-api-access-csbsz" (OuterVolumeSpecName: "kube-api-access-csbsz") pod "cd758757-2f52-4fe1-b24f-4ca697bf6498" (UID: "cd758757-2f52-4fe1-b24f-4ca697bf6498"). InnerVolumeSpecName "kube-api-access-csbsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.351884 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gfqfc" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.352188 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gfqfc" event={"ID":"f6dc01af-9510-4de2-a0ae-a841cc7bf815","Type":"ContainerDied","Data":"60eacaf05dcf7c83744f36940956c617654ea044d88bac8b818237e026ae2eb5"} Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.352236 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60eacaf05dcf7c83744f36940956c617654ea044d88bac8b818237e026ae2eb5" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.352566 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-kube-api-access-9kshh" (OuterVolumeSpecName: "kube-api-access-9kshh") pod "2001e70d-4b7a-4c4d-8ed3-54cedaea54ea" (UID: "2001e70d-4b7a-4c4d-8ed3-54cedaea54ea"). InnerVolumeSpecName "kube-api-access-9kshh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.354975 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-407c-account-create-update-pn24j" event={"ID":"2001e70d-4b7a-4c4d-8ed3-54cedaea54ea","Type":"ContainerDied","Data":"66b07f919175576d7674d4f3ec73a5e49afccbac3b5789d2ea625a7394ec9b24"} Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.355006 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66b07f919175576d7674d4f3ec73a5e49afccbac3b5789d2ea625a7394ec9b24" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.355043 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6dc01af-9510-4de2-a0ae-a841cc7bf815-kube-api-access-dspmm" (OuterVolumeSpecName: "kube-api-access-dspmm") pod "f6dc01af-9510-4de2-a0ae-a841cc7bf815" (UID: "f6dc01af-9510-4de2-a0ae-a841cc7bf815"). 
InnerVolumeSpecName "kube-api-access-dspmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.355063 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-407c-account-create-update-pn24j" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.356057 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac222fb2-5295-4eb7-8611-f2a7157f6176-kube-api-access-8b6vg" (OuterVolumeSpecName: "kube-api-access-8b6vg") pod "ac222fb2-5295-4eb7-8611-f2a7157f6176" (UID: "ac222fb2-5295-4eb7-8611-f2a7157f6176"). InnerVolumeSpecName "kube-api-access-8b6vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.358295 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdcd-account-create-update-x87fb" event={"ID":"cd758757-2f52-4fe1-b24f-4ca697bf6498","Type":"ContainerDied","Data":"74326ca1924d8adb0ef2e8a8b0d91e011684b0f7fec4ea176eada8e821830234"} Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.358322 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74326ca1924d8adb0ef2e8a8b0d91e011684b0f7fec4ea176eada8e821830234" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.358297 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8853f172-601e-4f67-96d3-5b910eba6c9a-kube-api-access-8vssh" (OuterVolumeSpecName: "kube-api-access-8vssh") pod "8853f172-601e-4f67-96d3-5b910eba6c9a" (UID: "8853f172-601e-4f67-96d3-5b910eba6c9a"). InnerVolumeSpecName "kube-api-access-8vssh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.358381 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bdcd-account-create-update-x87fb" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.368349 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dsmbw" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.369122 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dsmbw" event={"ID":"d2a34c54-3b76-42e5-b4a3-37a238ad1947","Type":"ContainerDied","Data":"ee24ad4fafd7a076bfad40fc0ab6dad7421549fd27cd8c3f49a2bd087c15f2c6"} Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.369174 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee24ad4fafd7a076bfad40fc0ab6dad7421549fd27cd8c3f49a2bd087c15f2c6" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.371265 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hwhjb" event={"ID":"8853f172-601e-4f67-96d3-5b910eba6c9a","Type":"ContainerDied","Data":"cc1aacb7a9c58f241d82b4179b63cf99f1cb7641dcc507dfd72ec8d6ba661248"} Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.371282 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc1aacb7a9c58f241d82b4179b63cf99f1cb7641dcc507dfd72ec8d6ba661248" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.371331 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hwhjb" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.375076 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e501-account-create-update-trk4v" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.383506 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e501-account-create-update-trk4v" event={"ID":"ac222fb2-5295-4eb7-8611-f2a7157f6176","Type":"ContainerDied","Data":"7bca7c02714ef0973ba0e26bfbce2aa5a45b40b0a1eabe5937d932152deb09e5"} Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.383546 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bca7c02714ef0973ba0e26bfbce2aa5a45b40b0a1eabe5937d932152deb09e5" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.449552 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kshh\" (UniqueName: \"kubernetes.io/projected/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-kube-api-access-9kshh\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.449611 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8853f172-601e-4f67-96d3-5b910eba6c9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.449682 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vssh\" (UniqueName: \"kubernetes.io/projected/8853f172-601e-4f67-96d3-5b910eba6c9a-kube-api-access-8vssh\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.449695 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.449707 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csbsz\" (UniqueName: \"kubernetes.io/projected/cd758757-2f52-4fe1-b24f-4ca697bf6498-kube-api-access-csbsz\") on node \"crc\" 
DevicePath \"\"" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.449717 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b6vg\" (UniqueName: \"kubernetes.io/projected/ac222fb2-5295-4eb7-8611-f2a7157f6176-kube-api-access-8b6vg\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.449727 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd758757-2f52-4fe1-b24f-4ca697bf6498-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.449737 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6dc01af-9510-4de2-a0ae-a841cc7bf815-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:13 crc kubenswrapper[4811]: I0219 10:26:13.449747 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dspmm\" (UniqueName: \"kubernetes.io/projected/f6dc01af-9510-4de2-a0ae-a841cc7bf815-kube-api-access-dspmm\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:14 crc kubenswrapper[4811]: I0219 10:26:14.978464 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:14 crc kubenswrapper[4811]: E0219 10:26:14.978705 4811 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 10:26:14 crc kubenswrapper[4811]: E0219 10:26:14.979089 4811 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 10:26:14 crc kubenswrapper[4811]: E0219 10:26:14.979187 4811 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift podName:dce922ee-7833-4b43-9358-72aa7da07a03 nodeName:}" failed. No retries permitted until 2026-02-19 10:26:22.979157875 +0000 UTC m=+1071.505622561 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift") pod "swift-storage-0" (UID: "dce922ee-7833-4b43-9358-72aa7da07a03") : configmap "swift-ring-files" not found Feb 19 10:26:15 crc kubenswrapper[4811]: I0219 10:26:15.681203 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m2sl2" Feb 19 10:26:15 crc kubenswrapper[4811]: I0219 10:26:15.793583 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4999e042-333a-4d0b-815e-31da16e22f72-operator-scripts\") pod \"4999e042-333a-4d0b-815e-31da16e22f72\" (UID: \"4999e042-333a-4d0b-815e-31da16e22f72\") " Feb 19 10:26:15 crc kubenswrapper[4811]: I0219 10:26:15.793674 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfmh2\" (UniqueName: \"kubernetes.io/projected/4999e042-333a-4d0b-815e-31da16e22f72-kube-api-access-wfmh2\") pod \"4999e042-333a-4d0b-815e-31da16e22f72\" (UID: \"4999e042-333a-4d0b-815e-31da16e22f72\") " Feb 19 10:26:15 crc kubenswrapper[4811]: I0219 10:26:15.794600 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4999e042-333a-4d0b-815e-31da16e22f72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4999e042-333a-4d0b-815e-31da16e22f72" (UID: "4999e042-333a-4d0b-815e-31da16e22f72"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:15 crc kubenswrapper[4811]: I0219 10:26:15.798936 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4999e042-333a-4d0b-815e-31da16e22f72-kube-api-access-wfmh2" (OuterVolumeSpecName: "kube-api-access-wfmh2") pod "4999e042-333a-4d0b-815e-31da16e22f72" (UID: "4999e042-333a-4d0b-815e-31da16e22f72"). InnerVolumeSpecName "kube-api-access-wfmh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:15 crc kubenswrapper[4811]: I0219 10:26:15.895503 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4999e042-333a-4d0b-815e-31da16e22f72-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:15 crc kubenswrapper[4811]: I0219 10:26:15.895548 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfmh2\" (UniqueName: \"kubernetes.io/projected/4999e042-333a-4d0b-815e-31da16e22f72-kube-api-access-wfmh2\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.157494 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.212249 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6js64"] Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.212673 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" podUID="33876f22-ea5f-424b-8f1a-fb0058bf3384" containerName="dnsmasq-dns" containerID="cri-o://1238ea54d0b03a05ae05ee6a7624c9014e0d6b4152522099169cdd41564d04c3" gracePeriod=10 Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.414333 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-m2sl2" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.414526 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m2sl2" event={"ID":"4999e042-333a-4d0b-815e-31da16e22f72","Type":"ContainerDied","Data":"2c40a6711bed86b0a30db1a170eb17f65ff45abac4cad69b7ca649c1c173846d"} Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.414828 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c40a6711bed86b0a30db1a170eb17f65ff45abac4cad69b7ca649c1c173846d" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.418113 4811 generic.go:334] "Generic (PLEG): container finished" podID="33876f22-ea5f-424b-8f1a-fb0058bf3384" containerID="1238ea54d0b03a05ae05ee6a7624c9014e0d6b4152522099169cdd41564d04c3" exitCode=0 Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.418198 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" event={"ID":"33876f22-ea5f-424b-8f1a-fb0058bf3384","Type":"ContainerDied","Data":"1238ea54d0b03a05ae05ee6a7624c9014e0d6b4152522099169cdd41564d04c3"} Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.426115 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-r67q5" event={"ID":"30bb0370-e0d8-4aaf-8a07-35ab4af890c6","Type":"ContainerStarted","Data":"b52cbb0a08d47e38b03c05b690b6858215ed0cd49f113a2105d4b85540c739d5"} Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.453275 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-r67q5" podStartSLOduration=2.816204424 podStartE2EDuration="6.453259032s" podCreationTimestamp="2026-02-19 10:26:10 +0000 UTC" firstStartedPulling="2026-02-19 10:26:11.844037093 +0000 UTC m=+1060.370501779" lastFinishedPulling="2026-02-19 10:26:15.481091701 +0000 UTC m=+1064.007556387" observedRunningTime="2026-02-19 10:26:16.448111071 
+0000 UTC m=+1064.974575757" watchObservedRunningTime="2026-02-19 10:26:16.453259032 +0000 UTC m=+1064.979723718" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.712054 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.812571 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-nb\") pod \"33876f22-ea5f-424b-8f1a-fb0058bf3384\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.813165 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-dns-svc\") pod \"33876f22-ea5f-424b-8f1a-fb0058bf3384\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.813222 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-sb\") pod \"33876f22-ea5f-424b-8f1a-fb0058bf3384\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.813250 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-config\") pod \"33876f22-ea5f-424b-8f1a-fb0058bf3384\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.813319 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mstrp\" (UniqueName: \"kubernetes.io/projected/33876f22-ea5f-424b-8f1a-fb0058bf3384-kube-api-access-mstrp\") pod 
\"33876f22-ea5f-424b-8f1a-fb0058bf3384\" (UID: \"33876f22-ea5f-424b-8f1a-fb0058bf3384\") " Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.841626 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33876f22-ea5f-424b-8f1a-fb0058bf3384-kube-api-access-mstrp" (OuterVolumeSpecName: "kube-api-access-mstrp") pod "33876f22-ea5f-424b-8f1a-fb0058bf3384" (UID: "33876f22-ea5f-424b-8f1a-fb0058bf3384"). InnerVolumeSpecName "kube-api-access-mstrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.858860 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33876f22-ea5f-424b-8f1a-fb0058bf3384" (UID: "33876f22-ea5f-424b-8f1a-fb0058bf3384"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.860153 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-config" (OuterVolumeSpecName: "config") pod "33876f22-ea5f-424b-8f1a-fb0058bf3384" (UID: "33876f22-ea5f-424b-8f1a-fb0058bf3384"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.865038 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "33876f22-ea5f-424b-8f1a-fb0058bf3384" (UID: "33876f22-ea5f-424b-8f1a-fb0058bf3384"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.873532 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33876f22-ea5f-424b-8f1a-fb0058bf3384" (UID: "33876f22-ea5f-424b-8f1a-fb0058bf3384"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.916472 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.916742 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mstrp\" (UniqueName: \"kubernetes.io/projected/33876f22-ea5f-424b-8f1a-fb0058bf3384-kube-api-access-mstrp\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.916849 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.916941 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:16 crc kubenswrapper[4811]: I0219 10:26:16.916995 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33876f22-ea5f-424b-8f1a-fb0058bf3384-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:17 crc kubenswrapper[4811]: I0219 10:26:17.442834 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" Feb 19 10:26:17 crc kubenswrapper[4811]: I0219 10:26:17.442876 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6js64" event={"ID":"33876f22-ea5f-424b-8f1a-fb0058bf3384","Type":"ContainerDied","Data":"9a195c2c060e64cff3e45ccbf3ca9bdfd594cc11bfa273b489eb22330cffeb85"} Feb 19 10:26:17 crc kubenswrapper[4811]: I0219 10:26:17.442916 4811 scope.go:117] "RemoveContainer" containerID="1238ea54d0b03a05ae05ee6a7624c9014e0d6b4152522099169cdd41564d04c3" Feb 19 10:26:17 crc kubenswrapper[4811]: I0219 10:26:17.480640 4811 scope.go:117] "RemoveContainer" containerID="45d14b4cd54548216e1eea25b511a21708268bacaec4cda5ae12e0ad173b1f3c" Feb 19 10:26:17 crc kubenswrapper[4811]: I0219 10:26:17.507249 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6js64"] Feb 19 10:26:17 crc kubenswrapper[4811]: I0219 10:26:17.516396 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6js64"] Feb 19 10:26:18 crc kubenswrapper[4811]: I0219 10:26:18.599873 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33876f22-ea5f-424b-8f1a-fb0058bf3384" path="/var/lib/kubelet/pods/33876f22-ea5f-424b-8f1a-fb0058bf3384/volumes" Feb 19 10:26:18 crc kubenswrapper[4811]: I0219 10:26:18.857066 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035147 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-z4vh6"] Feb 19 10:26:19 crc kubenswrapper[4811]: E0219 10:26:19.035609 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6dc01af-9510-4de2-a0ae-a841cc7bf815" containerName="mariadb-database-create" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035634 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f6dc01af-9510-4de2-a0ae-a841cc7bf815" containerName="mariadb-database-create" Feb 19 10:26:19 crc kubenswrapper[4811]: E0219 10:26:19.035646 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac222fb2-5295-4eb7-8611-f2a7157f6176" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035652 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac222fb2-5295-4eb7-8611-f2a7157f6176" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: E0219 10:26:19.035665 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2001e70d-4b7a-4c4d-8ed3-54cedaea54ea" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035673 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2001e70d-4b7a-4c4d-8ed3-54cedaea54ea" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: E0219 10:26:19.035681 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33876f22-ea5f-424b-8f1a-fb0058bf3384" containerName="dnsmasq-dns" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035687 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="33876f22-ea5f-424b-8f1a-fb0058bf3384" containerName="dnsmasq-dns" Feb 19 10:26:19 crc kubenswrapper[4811]: E0219 10:26:19.035695 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8853f172-601e-4f67-96d3-5b910eba6c9a" containerName="mariadb-database-create" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035701 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8853f172-601e-4f67-96d3-5b910eba6c9a" containerName="mariadb-database-create" Feb 19 10:26:19 crc kubenswrapper[4811]: E0219 10:26:19.035717 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd758757-2f52-4fe1-b24f-4ca697bf6498" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: 
I0219 10:26:19.035724 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd758757-2f52-4fe1-b24f-4ca697bf6498" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: E0219 10:26:19.035730 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a34c54-3b76-42e5-b4a3-37a238ad1947" containerName="mariadb-database-create" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035736 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a34c54-3b76-42e5-b4a3-37a238ad1947" containerName="mariadb-database-create" Feb 19 10:26:19 crc kubenswrapper[4811]: E0219 10:26:19.035754 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33876f22-ea5f-424b-8f1a-fb0058bf3384" containerName="init" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035759 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="33876f22-ea5f-424b-8f1a-fb0058bf3384" containerName="init" Feb 19 10:26:19 crc kubenswrapper[4811]: E0219 10:26:19.035767 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4999e042-333a-4d0b-815e-31da16e22f72" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035773 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4999e042-333a-4d0b-815e-31da16e22f72" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035942 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd758757-2f52-4fe1-b24f-4ca697bf6498" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035958 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2001e70d-4b7a-4c4d-8ed3-54cedaea54ea" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035967 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="33876f22-ea5f-424b-8f1a-fb0058bf3384" 
containerName="dnsmasq-dns" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035975 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac222fb2-5295-4eb7-8611-f2a7157f6176" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035982 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4999e042-333a-4d0b-815e-31da16e22f72" containerName="mariadb-account-create-update" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.035992 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a34c54-3b76-42e5-b4a3-37a238ad1947" containerName="mariadb-database-create" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.036000 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6dc01af-9510-4de2-a0ae-a841cc7bf815" containerName="mariadb-database-create" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.036012 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8853f172-601e-4f67-96d3-5b910eba6c9a" containerName="mariadb-database-create" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.036544 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.039424 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gsjnn" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.039720 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.050994 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z4vh6"] Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.092668 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-config-data\") pod \"glance-db-sync-z4vh6\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.092733 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-combined-ca-bundle\") pod \"glance-db-sync-z4vh6\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.092783 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mmnn\" (UniqueName: \"kubernetes.io/projected/a9133229-76e0-4848-b61e-f70c53216929-kube-api-access-7mmnn\") pod \"glance-db-sync-z4vh6\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.092851 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-db-sync-config-data\") pod \"glance-db-sync-z4vh6\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.193525 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-db-sync-config-data\") pod \"glance-db-sync-z4vh6\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.193641 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-config-data\") pod \"glance-db-sync-z4vh6\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.193717 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-combined-ca-bundle\") pod \"glance-db-sync-z4vh6\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.193777 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mmnn\" (UniqueName: \"kubernetes.io/projected/a9133229-76e0-4848-b61e-f70c53216929-kube-api-access-7mmnn\") pod \"glance-db-sync-z4vh6\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.202128 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-db-sync-config-data\") pod \"glance-db-sync-z4vh6\" (UID: 
\"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.207086 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-combined-ca-bundle\") pod \"glance-db-sync-z4vh6\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.221224 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-config-data\") pod \"glance-db-sync-z4vh6\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.241339 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mmnn\" (UniqueName: \"kubernetes.io/projected/a9133229-76e0-4848-b61e-f70c53216929-kube-api-access-7mmnn\") pod \"glance-db-sync-z4vh6\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.404382 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-z4vh6" Feb 19 10:26:19 crc kubenswrapper[4811]: I0219 10:26:19.913237 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-z4vh6"] Feb 19 10:26:20 crc kubenswrapper[4811]: I0219 10:26:20.466793 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z4vh6" event={"ID":"a9133229-76e0-4848-b61e-f70c53216929","Type":"ContainerStarted","Data":"9a4cf576dba88b3d5b3a2030533dae79ca5226d41ccb3a220006af6dfcde099e"} Feb 19 10:26:21 crc kubenswrapper[4811]: I0219 10:26:21.702792 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-m2sl2"] Feb 19 10:26:21 crc kubenswrapper[4811]: I0219 10:26:21.708721 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-m2sl2"] Feb 19 10:26:22 crc kubenswrapper[4811]: I0219 10:26:22.612281 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4999e042-333a-4d0b-815e-31da16e22f72" path="/var/lib/kubelet/pods/4999e042-333a-4d0b-815e-31da16e22f72/volumes" Feb 19 10:26:23 crc kubenswrapper[4811]: I0219 10:26:23.069437 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:23 crc kubenswrapper[4811]: I0219 10:26:23.093262 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dce922ee-7833-4b43-9358-72aa7da07a03-etc-swift\") pod \"swift-storage-0\" (UID: \"dce922ee-7833-4b43-9358-72aa7da07a03\") " pod="openstack/swift-storage-0" Feb 19 10:26:23 crc kubenswrapper[4811]: I0219 10:26:23.254248 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 10:26:23 crc kubenswrapper[4811]: I0219 10:26:23.494912 4811 generic.go:334] "Generic (PLEG): container finished" podID="30bb0370-e0d8-4aaf-8a07-35ab4af890c6" containerID="b52cbb0a08d47e38b03c05b690b6858215ed0cd49f113a2105d4b85540c739d5" exitCode=0 Feb 19 10:26:23 crc kubenswrapper[4811]: I0219 10:26:23.494971 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-r67q5" event={"ID":"30bb0370-e0d8-4aaf-8a07-35ab4af890c6","Type":"ContainerDied","Data":"b52cbb0a08d47e38b03c05b690b6858215ed0cd49f113a2105d4b85540c739d5"} Feb 19 10:26:23 crc kubenswrapper[4811]: I0219 10:26:23.777405 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 10:26:23 crc kubenswrapper[4811]: W0219 10:26:23.780711 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce922ee_7833_4b43_9358_72aa7da07a03.slice/crio-8da030931bf2dabecf828c0f9bc2d4695afbe0b822867906b1c09f207c06fd36 WatchSource:0}: Error finding container 8da030931bf2dabecf828c0f9bc2d4695afbe0b822867906b1c09f207c06fd36: Status 404 returned error can't find the container with id 8da030931bf2dabecf828c0f9bc2d4695afbe0b822867906b1c09f207c06fd36 Feb 19 10:26:24 crc kubenswrapper[4811]: I0219 10:26:24.504507 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"8da030931bf2dabecf828c0f9bc2d4695afbe0b822867906b1c09f207c06fd36"} Feb 19 10:26:24 crc kubenswrapper[4811]: I0219 10:26:24.590929 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mvhxm" podUID="00337a1f-39d0-4583-b35b-a50aeb572f60" containerName="ovn-controller" probeResult="failure" output=< Feb 19 10:26:24 crc kubenswrapper[4811]: ERROR - ovn-controller connection status is 'not connected', expecting 
'connected' status Feb 19 10:26:24 crc kubenswrapper[4811]: > Feb 19 10:26:24 crc kubenswrapper[4811]: I0219 10:26:24.848482 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.040526 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-scripts\") pod \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.040605 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-etc-swift\") pod \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.040792 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rljw9\" (UniqueName: \"kubernetes.io/projected/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-kube-api-access-rljw9\") pod \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.040821 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-dispersionconf\") pod \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.040869 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-ring-data-devices\") pod \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\" (UID: 
\"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.040894 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-swiftconf\") pod \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.040922 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-combined-ca-bundle\") pod \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\" (UID: \"30bb0370-e0d8-4aaf-8a07-35ab4af890c6\") " Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.041929 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "30bb0370-e0d8-4aaf-8a07-35ab4af890c6" (UID: "30bb0370-e0d8-4aaf-8a07-35ab4af890c6"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.044275 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "30bb0370-e0d8-4aaf-8a07-35ab4af890c6" (UID: "30bb0370-e0d8-4aaf-8a07-35ab4af890c6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.049975 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-kube-api-access-rljw9" (OuterVolumeSpecName: "kube-api-access-rljw9") pod "30bb0370-e0d8-4aaf-8a07-35ab4af890c6" (UID: "30bb0370-e0d8-4aaf-8a07-35ab4af890c6"). 
InnerVolumeSpecName "kube-api-access-rljw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.054887 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "30bb0370-e0d8-4aaf-8a07-35ab4af890c6" (UID: "30bb0370-e0d8-4aaf-8a07-35ab4af890c6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.068634 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-scripts" (OuterVolumeSpecName: "scripts") pod "30bb0370-e0d8-4aaf-8a07-35ab4af890c6" (UID: "30bb0370-e0d8-4aaf-8a07-35ab4af890c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.079336 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30bb0370-e0d8-4aaf-8a07-35ab4af890c6" (UID: "30bb0370-e0d8-4aaf-8a07-35ab4af890c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.080437 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "30bb0370-e0d8-4aaf-8a07-35ab4af890c6" (UID: "30bb0370-e0d8-4aaf-8a07-35ab4af890c6"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.144079 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rljw9\" (UniqueName: \"kubernetes.io/projected/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-kube-api-access-rljw9\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.144355 4811 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.144439 4811 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.144537 4811 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.144628 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.144887 4811 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.144973 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30bb0370-e0d8-4aaf-8a07-35ab4af890c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.517295 4811 generic.go:334] 
"Generic (PLEG): container finished" podID="f263c1a8-dfa3-4383-849c-bde314ae5503" containerID="f3e9158665e56adf6d6316c3aa8190c955795799012ad86edea35e82b7ad74e8" exitCode=0 Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.517385 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f263c1a8-dfa3-4383-849c-bde314ae5503","Type":"ContainerDied","Data":"f3e9158665e56adf6d6316c3aa8190c955795799012ad86edea35e82b7ad74e8"} Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.520051 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-r67q5" event={"ID":"30bb0370-e0d8-4aaf-8a07-35ab4af890c6","Type":"ContainerDied","Data":"c7913432578cd9e1fbc3656ee35245fec52f011b89e1896f569d5e011525bb7a"} Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.520104 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7913432578cd9e1fbc3656ee35245fec52f011b89e1896f569d5e011525bb7a" Feb 19 10:26:25 crc kubenswrapper[4811]: I0219 10:26:25.520184 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-r67q5" Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.528776 4811 generic.go:334] "Generic (PLEG): container finished" podID="d7378877-24bd-40fc-89c8-6bc6115040a2" containerID="d9655df4b91bafa3e771ef345d5f6edc51398eec9e08ccdc8f1eec6c229e5dc4" exitCode=0 Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.529375 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7378877-24bd-40fc-89c8-6bc6115040a2","Type":"ContainerDied","Data":"d9655df4b91bafa3e771ef345d5f6edc51398eec9e08ccdc8f1eec6c229e5dc4"} Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.735916 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6ctsp"] Feb 19 10:26:26 crc kubenswrapper[4811]: E0219 10:26:26.736248 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bb0370-e0d8-4aaf-8a07-35ab4af890c6" containerName="swift-ring-rebalance" Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.736266 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bb0370-e0d8-4aaf-8a07-35ab4af890c6" containerName="swift-ring-rebalance" Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.736465 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bb0370-e0d8-4aaf-8a07-35ab4af890c6" containerName="swift-ring-rebalance" Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.737051 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6ctsp" Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.739970 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.746724 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6ctsp"] Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.889003 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2l2x\" (UniqueName: \"kubernetes.io/projected/c18c4186-9fb1-448f-91d4-1cb18a5cc172-kube-api-access-w2l2x\") pod \"root-account-create-update-6ctsp\" (UID: \"c18c4186-9fb1-448f-91d4-1cb18a5cc172\") " pod="openstack/root-account-create-update-6ctsp" Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.889136 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18c4186-9fb1-448f-91d4-1cb18a5cc172-operator-scripts\") pod \"root-account-create-update-6ctsp\" (UID: \"c18c4186-9fb1-448f-91d4-1cb18a5cc172\") " pod="openstack/root-account-create-update-6ctsp" Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.991880 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18c4186-9fb1-448f-91d4-1cb18a5cc172-operator-scripts\") pod \"root-account-create-update-6ctsp\" (UID: \"c18c4186-9fb1-448f-91d4-1cb18a5cc172\") " pod="openstack/root-account-create-update-6ctsp" Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.992064 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2l2x\" (UniqueName: \"kubernetes.io/projected/c18c4186-9fb1-448f-91d4-1cb18a5cc172-kube-api-access-w2l2x\") pod \"root-account-create-update-6ctsp\" (UID: 
\"c18c4186-9fb1-448f-91d4-1cb18a5cc172\") " pod="openstack/root-account-create-update-6ctsp" Feb 19 10:26:26 crc kubenswrapper[4811]: I0219 10:26:26.993014 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18c4186-9fb1-448f-91d4-1cb18a5cc172-operator-scripts\") pod \"root-account-create-update-6ctsp\" (UID: \"c18c4186-9fb1-448f-91d4-1cb18a5cc172\") " pod="openstack/root-account-create-update-6ctsp" Feb 19 10:26:27 crc kubenswrapper[4811]: I0219 10:26:27.015011 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2l2x\" (UniqueName: \"kubernetes.io/projected/c18c4186-9fb1-448f-91d4-1cb18a5cc172-kube-api-access-w2l2x\") pod \"root-account-create-update-6ctsp\" (UID: \"c18c4186-9fb1-448f-91d4-1cb18a5cc172\") " pod="openstack/root-account-create-update-6ctsp" Feb 19 10:26:27 crc kubenswrapper[4811]: I0219 10:26:27.070720 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6ctsp" Feb 19 10:26:27 crc kubenswrapper[4811]: I0219 10:26:27.542837 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7378877-24bd-40fc-89c8-6bc6115040a2","Type":"ContainerStarted","Data":"3aa5b3ed12c1474392602f778fb91ebade5465a969eaff3e1726fba5007fa711"} Feb 19 10:26:27 crc kubenswrapper[4811]: I0219 10:26:27.544433 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:26:27 crc kubenswrapper[4811]: I0219 10:26:27.552166 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f263c1a8-dfa3-4383-849c-bde314ae5503","Type":"ContainerStarted","Data":"06aff5506c30621032dd59bbf567a788f7754e5d787a946942c8f30810ae381b"} Feb 19 10:26:27 crc kubenswrapper[4811]: I0219 10:26:27.552504 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 10:26:27 crc kubenswrapper[4811]: I0219 10:26:27.601041 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.97393122 podStartE2EDuration="58.601022171s" podCreationTimestamp="2026-02-19 10:25:29 +0000 UTC" firstStartedPulling="2026-02-19 10:25:42.939425025 +0000 UTC m=+1031.465889711" lastFinishedPulling="2026-02-19 10:25:50.566515976 +0000 UTC m=+1039.092980662" observedRunningTime="2026-02-19 10:26:27.568356758 +0000 UTC m=+1076.094821454" watchObservedRunningTime="2026-02-19 10:26:27.601022171 +0000 UTC m=+1076.127486857" Feb 19 10:26:27 crc kubenswrapper[4811]: I0219 10:26:27.604804 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6ctsp"] Feb 19 10:26:27 crc kubenswrapper[4811]: I0219 10:26:27.605315 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=51.347848908 podStartE2EDuration="59.6053035s" podCreationTimestamp="2026-02-19 10:25:28 +0000 UTC" firstStartedPulling="2026-02-19 10:25:42.625366173 +0000 UTC m=+1031.151830859" lastFinishedPulling="2026-02-19 10:25:50.882820775 +0000 UTC m=+1039.409285451" observedRunningTime="2026-02-19 10:26:27.593560771 +0000 UTC m=+1076.120025457" watchObservedRunningTime="2026-02-19 10:26:27.6053035 +0000 UTC m=+1076.131768186" Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.625117 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mvhxm" podUID="00337a1f-39d0-4583-b35b-a50aeb572f60" containerName="ovn-controller" probeResult="failure" output=< Feb 19 10:26:29 crc kubenswrapper[4811]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 10:26:29 crc kubenswrapper[4811]: > Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.661848 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.663643 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bddvk" Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.906276 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mvhxm-config-tp66b"] Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.908058 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.911344 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.916669 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k627d\" (UniqueName: \"kubernetes.io/projected/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-kube-api-access-k627d\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.916731 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.916783 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run-ovn\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.916810 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-additional-scripts\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.916880 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-log-ovn\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.916938 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-scripts\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:29 crc kubenswrapper[4811]: I0219 10:26:29.920291 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mvhxm-config-tp66b"] Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.018137 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-scripts\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.018226 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k627d\" (UniqueName: \"kubernetes.io/projected/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-kube-api-access-k627d\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.018290 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run\") pod \"ovn-controller-mvhxm-config-tp66b\" 
(UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.018358 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run-ovn\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.018390 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-additional-scripts\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.018512 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-log-ovn\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.018931 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-log-ovn\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.019172 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run-ovn\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: 
\"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.020232 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-additional-scripts\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.021353 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-scripts\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.021614 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.042885 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k627d\" (UniqueName: \"kubernetes.io/projected/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-kube-api-access-k627d\") pod \"ovn-controller-mvhxm-config-tp66b\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:30 crc kubenswrapper[4811]: I0219 10:26:30.229768 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:34 crc kubenswrapper[4811]: I0219 10:26:34.581783 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mvhxm" podUID="00337a1f-39d0-4583-b35b-a50aeb572f60" containerName="ovn-controller" probeResult="failure" output=< Feb 19 10:26:34 crc kubenswrapper[4811]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 10:26:34 crc kubenswrapper[4811]: > Feb 19 10:26:34 crc kubenswrapper[4811]: E0219 10:26:34.582957 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 19 10:26:34 crc kubenswrapper[4811]: E0219 10:26:34.583115 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mmnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-z4vh6_openstack(a9133229-76e0-4848-b61e-f70c53216929): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 19 10:26:34 crc kubenswrapper[4811]: E0219 10:26:34.584343 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-z4vh6" podUID="a9133229-76e0-4848-b61e-f70c53216929" Feb 19 10:26:34 crc kubenswrapper[4811]: E0219 10:26:34.690511 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-z4vh6" podUID="a9133229-76e0-4848-b61e-f70c53216929" Feb 19 10:26:34 crc kubenswrapper[4811]: I0219 10:26:34.817363 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 10:26:35 crc kubenswrapper[4811]: I0219 10:26:35.282005 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mvhxm-config-tp66b"] Feb 19 10:26:35 crc kubenswrapper[4811]: I0219 10:26:35.696643 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mvhxm-config-tp66b" event={"ID":"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4","Type":"ContainerStarted","Data":"15e6e5e5cabf6a9e114089ef46b808da8b0a7da3dc30704d39ef026b3f176c41"} Feb 19 10:26:35 crc kubenswrapper[4811]: I0219 10:26:35.696818 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mvhxm-config-tp66b" event={"ID":"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4","Type":"ContainerStarted","Data":"22f52e63388b9d511cce2d44905a7a93e3b3659f9255fff68d9307b4de561d56"} Feb 19 10:26:35 crc kubenswrapper[4811]: I0219 10:26:35.700661 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"e8828c3ef8de74a26163067cd4a330ae92b5c7c25184d363a07ac20a8e10bf5c"} Feb 19 10:26:35 crc kubenswrapper[4811]: I0219 10:26:35.700815 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"f28b81234d34bb768f1df63216b609fffe34fb5914928a0266745c29dc80a11d"} Feb 19 10:26:35 crc kubenswrapper[4811]: I0219 10:26:35.700838 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"09c26ea3a5fcc27a0073be0a4a9913d9f58f41d520ae77aef408a43e1cccb5d0"} Feb 19 10:26:35 crc kubenswrapper[4811]: I0219 10:26:35.700850 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"bf72991f76dcf5b42bee078b3ae0fca517e31b8a50ef8a7a0d6af43f51422f35"} Feb 19 10:26:35 crc kubenswrapper[4811]: I0219 10:26:35.704867 4811 generic.go:334] "Generic (PLEG): container finished" podID="c18c4186-9fb1-448f-91d4-1cb18a5cc172" containerID="aa5516e591105c4eec7a2b66b6b9c8a67d04b36ae1ece5fa766c9f44b7c02fdc" exitCode=0 Feb 19 10:26:35 crc kubenswrapper[4811]: I0219 10:26:35.704915 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6ctsp" event={"ID":"c18c4186-9fb1-448f-91d4-1cb18a5cc172","Type":"ContainerDied","Data":"aa5516e591105c4eec7a2b66b6b9c8a67d04b36ae1ece5fa766c9f44b7c02fdc"} Feb 19 10:26:35 crc kubenswrapper[4811]: I0219 10:26:35.704939 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6ctsp" event={"ID":"c18c4186-9fb1-448f-91d4-1cb18a5cc172","Type":"ContainerStarted","Data":"7016b650fc6123e21224f78baf045971ab7797c01209768ce08556a1d7210faf"} Feb 19 10:26:35 crc kubenswrapper[4811]: I0219 
10:26:35.719519 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mvhxm-config-tp66b" podStartSLOduration=6.719501458 podStartE2EDuration="6.719501458s" podCreationTimestamp="2026-02-19 10:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:26:35.712149381 +0000 UTC m=+1084.238614067" watchObservedRunningTime="2026-02-19 10:26:35.719501458 +0000 UTC m=+1084.245966144" Feb 19 10:26:36 crc kubenswrapper[4811]: I0219 10:26:36.715271 4811 generic.go:334] "Generic (PLEG): container finished" podID="7cc320aa-b391-42ce-a0ad-83fc5ccee6c4" containerID="15e6e5e5cabf6a9e114089ef46b808da8b0a7da3dc30704d39ef026b3f176c41" exitCode=0 Feb 19 10:26:36 crc kubenswrapper[4811]: I0219 10:26:36.715416 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mvhxm-config-tp66b" event={"ID":"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4","Type":"ContainerDied","Data":"15e6e5e5cabf6a9e114089ef46b808da8b0a7da3dc30704d39ef026b3f176c41"} Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.055630 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6ctsp" Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.250698 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2l2x\" (UniqueName: \"kubernetes.io/projected/c18c4186-9fb1-448f-91d4-1cb18a5cc172-kube-api-access-w2l2x\") pod \"c18c4186-9fb1-448f-91d4-1cb18a5cc172\" (UID: \"c18c4186-9fb1-448f-91d4-1cb18a5cc172\") " Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.250870 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18c4186-9fb1-448f-91d4-1cb18a5cc172-operator-scripts\") pod \"c18c4186-9fb1-448f-91d4-1cb18a5cc172\" (UID: \"c18c4186-9fb1-448f-91d4-1cb18a5cc172\") " Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.251351 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c18c4186-9fb1-448f-91d4-1cb18a5cc172-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c18c4186-9fb1-448f-91d4-1cb18a5cc172" (UID: "c18c4186-9fb1-448f-91d4-1cb18a5cc172"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.256595 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18c4186-9fb1-448f-91d4-1cb18a5cc172-kube-api-access-w2l2x" (OuterVolumeSpecName: "kube-api-access-w2l2x") pod "c18c4186-9fb1-448f-91d4-1cb18a5cc172" (UID: "c18c4186-9fb1-448f-91d4-1cb18a5cc172"). InnerVolumeSpecName "kube-api-access-w2l2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.352710 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c18c4186-9fb1-448f-91d4-1cb18a5cc172-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.352754 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2l2x\" (UniqueName: \"kubernetes.io/projected/c18c4186-9fb1-448f-91d4-1cb18a5cc172-kube-api-access-w2l2x\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.740129 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"162a22fcf6f615be80ff353ef290e2308ca06fdfad0ba2907c05b9016450a32f"} Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.740491 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"40f53895090e6793952807f87f70891a53ca1a35775f87c046215b001060d828"} Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.743337 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6ctsp" event={"ID":"c18c4186-9fb1-448f-91d4-1cb18a5cc172","Type":"ContainerDied","Data":"7016b650fc6123e21224f78baf045971ab7797c01209768ce08556a1d7210faf"} Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.743353 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6ctsp" Feb 19 10:26:37 crc kubenswrapper[4811]: I0219 10:26:37.743365 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7016b650fc6123e21224f78baf045971ab7797c01209768ce08556a1d7210faf" Feb 19 10:26:37 crc kubenswrapper[4811]: E0219 10:26:37.835017 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc18c4186_9fb1_448f_91d4_1cb18a5cc172.slice\": RecentStats: unable to find data in memory cache]" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.217875 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.368057 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-scripts\") pod \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.368654 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-additional-scripts\") pod \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.368724 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run\") pod \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.368801 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-k627d\" (UniqueName: \"kubernetes.io/projected/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-kube-api-access-k627d\") pod \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.368839 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run" (OuterVolumeSpecName: "var-run") pod "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4" (UID: "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.368866 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-log-ovn\") pod \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.368891 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mvhxm-config-tp66b"] Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.369018 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run-ovn\") pod \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\" (UID: \"7cc320aa-b391-42ce-a0ad-83fc5ccee6c4\") " Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.369065 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4" (UID: "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.369123 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4" (UID: "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.369195 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4" (UID: "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.369329 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-scripts" (OuterVolumeSpecName: "scripts") pod "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4" (UID: "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.369836 4811 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.369865 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.369877 4811 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.369891 4811 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.369903 4811 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.374133 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-kube-api-access-k627d" (OuterVolumeSpecName: "kube-api-access-k627d") pod "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4" (UID: "7cc320aa-b391-42ce-a0ad-83fc5ccee6c4"). InnerVolumeSpecName "kube-api-access-k627d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.377752 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mvhxm-config-tp66b"] Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.470828 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k627d\" (UniqueName: \"kubernetes.io/projected/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4-kube-api-access-k627d\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.592745 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cc320aa-b391-42ce-a0ad-83fc5ccee6c4" path="/var/lib/kubelet/pods/7cc320aa-b391-42ce-a0ad-83fc5ccee6c4/volumes" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.753617 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"f183a61a3cb4f4e4534a2cfe62909f1726e0195d339d085ad30852c1f55e5e87"} Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.753668 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"8a84e4594adab8112b3c54df734be7ae58396ee455f2e87cda2add3a4100116f"} Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.755146 4811 scope.go:117] "RemoveContainer" containerID="15e6e5e5cabf6a9e114089ef46b808da8b0a7da3dc30704d39ef026b3f176c41" Feb 19 10:26:38 crc kubenswrapper[4811]: I0219 10:26:38.755202 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mvhxm-config-tp66b" Feb 19 10:26:39 crc kubenswrapper[4811]: I0219 10:26:39.588146 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mvhxm" Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.229643 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.551654 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.783673 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"8f118dc0789bcbe68d3e8b81322de5220ee35fb6163e6e269da2c7cb272f9ba8"} Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.783729 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"82f5795e8cb484dd570db6e3a98cb23b97d19261b27d6bba08ce8a897b070f5e"} Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.783745 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"154f0658556ac1f1136e4166a25abc003f764514996a9c24b26250b844673ef7"} Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.895450 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wq4nk"] Feb 19 10:26:40 crc kubenswrapper[4811]: E0219 10:26:40.895859 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18c4186-9fb1-448f-91d4-1cb18a5cc172" containerName="mariadb-account-create-update" Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.895882 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c18c4186-9fb1-448f-91d4-1cb18a5cc172" containerName="mariadb-account-create-update" Feb 19 10:26:40 crc kubenswrapper[4811]: E0219 10:26:40.895918 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc320aa-b391-42ce-a0ad-83fc5ccee6c4" containerName="ovn-config" Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.895927 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc320aa-b391-42ce-a0ad-83fc5ccee6c4" containerName="ovn-config" Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.896136 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc320aa-b391-42ce-a0ad-83fc5ccee6c4" containerName="ovn-config" Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.896167 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18c4186-9fb1-448f-91d4-1cb18a5cc172" containerName="mariadb-account-create-update" Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.896819 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wq4nk" Feb 19 10:26:40 crc kubenswrapper[4811]: I0219 10:26:40.914284 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wq4nk"] Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.000293 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5217-account-create-update-bdx6j"] Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.002054 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5217-account-create-update-bdx6j" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.010921 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.021241 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q68l\" (UniqueName: \"kubernetes.io/projected/774737fd-2783-4012-b2f7-26966fc085d9-kube-api-access-8q68l\") pod \"cinder-db-create-wq4nk\" (UID: \"774737fd-2783-4012-b2f7-26966fc085d9\") " pod="openstack/cinder-db-create-wq4nk" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.021371 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/774737fd-2783-4012-b2f7-26966fc085d9-operator-scripts\") pod \"cinder-db-create-wq4nk\" (UID: \"774737fd-2783-4012-b2f7-26966fc085d9\") " pod="openstack/cinder-db-create-wq4nk" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.027670 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5217-account-create-update-bdx6j"] Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.088151 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xlmv4"] Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.096203 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xlmv4" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.097297 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xlmv4"] Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.122620 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdrp\" (UniqueName: \"kubernetes.io/projected/479fe0ed-404c-4378-b035-e7518e74e8db-kube-api-access-vxdrp\") pod \"cinder-5217-account-create-update-bdx6j\" (UID: \"479fe0ed-404c-4378-b035-e7518e74e8db\") " pod="openstack/cinder-5217-account-create-update-bdx6j" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.122676 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479fe0ed-404c-4378-b035-e7518e74e8db-operator-scripts\") pod \"cinder-5217-account-create-update-bdx6j\" (UID: \"479fe0ed-404c-4378-b035-e7518e74e8db\") " pod="openstack/cinder-5217-account-create-update-bdx6j" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.122703 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q68l\" (UniqueName: \"kubernetes.io/projected/774737fd-2783-4012-b2f7-26966fc085d9-kube-api-access-8q68l\") pod \"cinder-db-create-wq4nk\" (UID: \"774737fd-2783-4012-b2f7-26966fc085d9\") " pod="openstack/cinder-db-create-wq4nk" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.122752 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/774737fd-2783-4012-b2f7-26966fc085d9-operator-scripts\") pod \"cinder-db-create-wq4nk\" (UID: \"774737fd-2783-4012-b2f7-26966fc085d9\") " pod="openstack/cinder-db-create-wq4nk" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.123409 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/774737fd-2783-4012-b2f7-26966fc085d9-operator-scripts\") pod \"cinder-db-create-wq4nk\" (UID: \"774737fd-2783-4012-b2f7-26966fc085d9\") " pod="openstack/cinder-db-create-wq4nk" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.175058 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q68l\" (UniqueName: \"kubernetes.io/projected/774737fd-2783-4012-b2f7-26966fc085d9-kube-api-access-8q68l\") pod \"cinder-db-create-wq4nk\" (UID: \"774737fd-2783-4012-b2f7-26966fc085d9\") " pod="openstack/cinder-db-create-wq4nk" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.226011 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nh7x\" (UniqueName: \"kubernetes.io/projected/3390f3ef-b993-411b-85ff-e90bb430960c-kube-api-access-9nh7x\") pod \"neutron-db-create-xlmv4\" (UID: \"3390f3ef-b993-411b-85ff-e90bb430960c\") " pod="openstack/neutron-db-create-xlmv4" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.226136 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxdrp\" (UniqueName: \"kubernetes.io/projected/479fe0ed-404c-4378-b035-e7518e74e8db-kube-api-access-vxdrp\") pod \"cinder-5217-account-create-update-bdx6j\" (UID: \"479fe0ed-404c-4378-b035-e7518e74e8db\") " pod="openstack/cinder-5217-account-create-update-bdx6j" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.226171 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479fe0ed-404c-4378-b035-e7518e74e8db-operator-scripts\") pod \"cinder-5217-account-create-update-bdx6j\" (UID: \"479fe0ed-404c-4378-b035-e7518e74e8db\") " pod="openstack/cinder-5217-account-create-update-bdx6j" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.226206 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3390f3ef-b993-411b-85ff-e90bb430960c-operator-scripts\") pod \"neutron-db-create-xlmv4\" (UID: \"3390f3ef-b993-411b-85ff-e90bb430960c\") " pod="openstack/neutron-db-create-xlmv4" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.226887 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479fe0ed-404c-4378-b035-e7518e74e8db-operator-scripts\") pod \"cinder-5217-account-create-update-bdx6j\" (UID: \"479fe0ed-404c-4378-b035-e7518e74e8db\") " pod="openstack/cinder-5217-account-create-update-bdx6j" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.253535 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxdrp\" (UniqueName: \"kubernetes.io/projected/479fe0ed-404c-4378-b035-e7518e74e8db-kube-api-access-vxdrp\") pod \"cinder-5217-account-create-update-bdx6j\" (UID: \"479fe0ed-404c-4378-b035-e7518e74e8db\") " pod="openstack/cinder-5217-account-create-update-bdx6j" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.264913 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wq4nk" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.279496 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-hcwkd"] Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.280485 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hcwkd" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.317075 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hcwkd"] Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.328900 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nh7x\" (UniqueName: \"kubernetes.io/projected/3390f3ef-b993-411b-85ff-e90bb430960c-kube-api-access-9nh7x\") pod \"neutron-db-create-xlmv4\" (UID: \"3390f3ef-b993-411b-85ff-e90bb430960c\") " pod="openstack/neutron-db-create-xlmv4" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.328993 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d02e9a5-6f2e-4754-865e-51492f4f81f7-operator-scripts\") pod \"barbican-db-create-hcwkd\" (UID: \"5d02e9a5-6f2e-4754-865e-51492f4f81f7\") " pod="openstack/barbican-db-create-hcwkd" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.329063 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfcm4\" (UniqueName: \"kubernetes.io/projected/5d02e9a5-6f2e-4754-865e-51492f4f81f7-kube-api-access-sfcm4\") pod \"barbican-db-create-hcwkd\" (UID: \"5d02e9a5-6f2e-4754-865e-51492f4f81f7\") " pod="openstack/barbican-db-create-hcwkd" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.329107 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3390f3ef-b993-411b-85ff-e90bb430960c-operator-scripts\") pod \"neutron-db-create-xlmv4\" (UID: \"3390f3ef-b993-411b-85ff-e90bb430960c\") " pod="openstack/neutron-db-create-xlmv4" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.330185 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3390f3ef-b993-411b-85ff-e90bb430960c-operator-scripts\") pod \"neutron-db-create-xlmv4\" (UID: \"3390f3ef-b993-411b-85ff-e90bb430960c\") " pod="openstack/neutron-db-create-xlmv4" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.368388 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nh7x\" (UniqueName: \"kubernetes.io/projected/3390f3ef-b993-411b-85ff-e90bb430960c-kube-api-access-9nh7x\") pod \"neutron-db-create-xlmv4\" (UID: \"3390f3ef-b993-411b-85ff-e90bb430960c\") " pod="openstack/neutron-db-create-xlmv4" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.385734 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8130-account-create-update-trnwt"] Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.386928 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8130-account-create-update-trnwt" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.391629 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.395608 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-xq7x5"] Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.396837 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xq7x5"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.401742 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.402115 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.402546 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-znphl"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.411771 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.444941 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-operator-scripts\") pod \"neutron-8130-account-create-update-trnwt\" (UID: \"f9bab52b-f5c8-425d-bdc0-601e101eb0a4\") " pod="openstack/neutron-8130-account-create-update-trnwt"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.445127 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d02e9a5-6f2e-4754-865e-51492f4f81f7-operator-scripts\") pod \"barbican-db-create-hcwkd\" (UID: \"5d02e9a5-6f2e-4754-865e-51492f4f81f7\") " pod="openstack/barbican-db-create-hcwkd"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.445297 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqv27\" (UniqueName: \"kubernetes.io/projected/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-kube-api-access-sqv27\") pod \"neutron-8130-account-create-update-trnwt\" (UID: \"f9bab52b-f5c8-425d-bdc0-601e101eb0a4\") " pod="openstack/neutron-8130-account-create-update-trnwt"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.445589 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfcm4\" (UniqueName: \"kubernetes.io/projected/5d02e9a5-6f2e-4754-865e-51492f4f81f7-kube-api-access-sfcm4\") pod \"barbican-db-create-hcwkd\" (UID: \"5d02e9a5-6f2e-4754-865e-51492f4f81f7\") " pod="openstack/barbican-db-create-hcwkd"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.461494 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d02e9a5-6f2e-4754-865e-51492f4f81f7-operator-scripts\") pod \"barbican-db-create-hcwkd\" (UID: \"5d02e9a5-6f2e-4754-865e-51492f4f81f7\") " pod="openstack/barbican-db-create-hcwkd"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.468619 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5217-account-create-update-bdx6j"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.474903 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8130-account-create-update-trnwt"]
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.480743 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfcm4\" (UniqueName: \"kubernetes.io/projected/5d02e9a5-6f2e-4754-865e-51492f4f81f7-kube-api-access-sfcm4\") pod \"barbican-db-create-hcwkd\" (UID: \"5d02e9a5-6f2e-4754-865e-51492f4f81f7\") " pod="openstack/barbican-db-create-hcwkd"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.488091 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xq7x5"]
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.509711 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xlmv4"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.522987 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2d23-account-create-update-bgrsz"]
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.526894 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2d23-account-create-update-bgrsz"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.534762 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.554322 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpbr9\" (UniqueName: \"kubernetes.io/projected/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-kube-api-access-dpbr9\") pod \"keystone-db-sync-xq7x5\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " pod="openstack/keystone-db-sync-xq7x5"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.554596 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-config-data\") pod \"keystone-db-sync-xq7x5\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " pod="openstack/keystone-db-sync-xq7x5"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.554776 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqv27\" (UniqueName: \"kubernetes.io/projected/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-kube-api-access-sqv27\") pod \"neutron-8130-account-create-update-trnwt\" (UID: \"f9bab52b-f5c8-425d-bdc0-601e101eb0a4\") " pod="openstack/neutron-8130-account-create-update-trnwt"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.554933 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-combined-ca-bundle\") pod \"keystone-db-sync-xq7x5\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " pod="openstack/keystone-db-sync-xq7x5"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.559334 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-operator-scripts\") pod \"neutron-8130-account-create-update-trnwt\" (UID: \"f9bab52b-f5c8-425d-bdc0-601e101eb0a4\") " pod="openstack/neutron-8130-account-create-update-trnwt"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.561037 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-operator-scripts\") pod \"neutron-8130-account-create-update-trnwt\" (UID: \"f9bab52b-f5c8-425d-bdc0-601e101eb0a4\") " pod="openstack/neutron-8130-account-create-update-trnwt"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.586137 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2d23-account-create-update-bgrsz"]
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.591753 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqv27\" (UniqueName: \"kubernetes.io/projected/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-kube-api-access-sqv27\") pod \"neutron-8130-account-create-update-trnwt\" (UID: \"f9bab52b-f5c8-425d-bdc0-601e101eb0a4\") " pod="openstack/neutron-8130-account-create-update-trnwt"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.621746 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hcwkd"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.660979 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29308f94-60d2-402b-a4f0-76652537e794-operator-scripts\") pod \"barbican-2d23-account-create-update-bgrsz\" (UID: \"29308f94-60d2-402b-a4f0-76652537e794\") " pod="openstack/barbican-2d23-account-create-update-bgrsz"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.661245 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpbr9\" (UniqueName: \"kubernetes.io/projected/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-kube-api-access-dpbr9\") pod \"keystone-db-sync-xq7x5\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " pod="openstack/keystone-db-sync-xq7x5"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.661416 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-config-data\") pod \"keystone-db-sync-xq7x5\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " pod="openstack/keystone-db-sync-xq7x5"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.661559 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-combined-ca-bundle\") pod \"keystone-db-sync-xq7x5\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " pod="openstack/keystone-db-sync-xq7x5"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.661652 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxf4d\" (UniqueName: \"kubernetes.io/projected/29308f94-60d2-402b-a4f0-76652537e794-kube-api-access-dxf4d\") pod \"barbican-2d23-account-create-update-bgrsz\" (UID: \"29308f94-60d2-402b-a4f0-76652537e794\") " pod="openstack/barbican-2d23-account-create-update-bgrsz"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.679523 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-config-data\") pod \"keystone-db-sync-xq7x5\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " pod="openstack/keystone-db-sync-xq7x5"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.679657 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-combined-ca-bundle\") pod \"keystone-db-sync-xq7x5\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " pod="openstack/keystone-db-sync-xq7x5"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.695185 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpbr9\" (UniqueName: \"kubernetes.io/projected/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-kube-api-access-dpbr9\") pod \"keystone-db-sync-xq7x5\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " pod="openstack/keystone-db-sync-xq7x5"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.760831 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8130-account-create-update-trnwt"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.764194 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29308f94-60d2-402b-a4f0-76652537e794-operator-scripts\") pod \"barbican-2d23-account-create-update-bgrsz\" (UID: \"29308f94-60d2-402b-a4f0-76652537e794\") " pod="openstack/barbican-2d23-account-create-update-bgrsz"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.764306 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxf4d\" (UniqueName: \"kubernetes.io/projected/29308f94-60d2-402b-a4f0-76652537e794-kube-api-access-dxf4d\") pod \"barbican-2d23-account-create-update-bgrsz\" (UID: \"29308f94-60d2-402b-a4f0-76652537e794\") " pod="openstack/barbican-2d23-account-create-update-bgrsz"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.764833 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29308f94-60d2-402b-a4f0-76652537e794-operator-scripts\") pod \"barbican-2d23-account-create-update-bgrsz\" (UID: \"29308f94-60d2-402b-a4f0-76652537e794\") " pod="openstack/barbican-2d23-account-create-update-bgrsz"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.789073 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxf4d\" (UniqueName: \"kubernetes.io/projected/29308f94-60d2-402b-a4f0-76652537e794-kube-api-access-dxf4d\") pod \"barbican-2d23-account-create-update-bgrsz\" (UID: \"29308f94-60d2-402b-a4f0-76652537e794\") " pod="openstack/barbican-2d23-account-create-update-bgrsz"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.841069 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xq7x5"
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.895025 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"d54e081deaa5b11d1d4052a778fbf81613fe1d314df9217874b56fce47059889"}
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.895503 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"8c28090d664dd70abda4717ec86fe09c36a2003dad942b593e3b76fcfce3c85b"}
Feb 19 10:26:41 crc kubenswrapper[4811]: I0219 10:26:41.899831 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2d23-account-create-update-bgrsz"
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.175873 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wq4nk"]
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.533193 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5217-account-create-update-bdx6j"]
Feb 19 10:26:42 crc kubenswrapper[4811]: W0219 10:26:42.554733 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod479fe0ed_404c_4378_b035_e7518e74e8db.slice/crio-0462deaed7f513a0446a1837bc768c42288ce52f4ed1b1d251ba497c457be549 WatchSource:0}: Error finding container 0462deaed7f513a0446a1837bc768c42288ce52f4ed1b1d251ba497c457be549: Status 404 returned error can't find the container with id 0462deaed7f513a0446a1837bc768c42288ce52f4ed1b1d251ba497c457be549
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.641473 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xlmv4"]
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.739606 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8130-account-create-update-trnwt"]
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.764548 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xq7x5"]
Feb 19 10:26:42 crc kubenswrapper[4811]: W0219 10:26:42.776028 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c179a32_5560_4ed7_9e71_aaf2f196fb8e.slice/crio-5f3c14eaa88ba9eb1a3aa57116bcaa2cac191a3139bfa0041c0ea1b1d73af359 WatchSource:0}: Error finding container 5f3c14eaa88ba9eb1a3aa57116bcaa2cac191a3139bfa0041c0ea1b1d73af359: Status 404 returned error can't find the container with id 5f3c14eaa88ba9eb1a3aa57116bcaa2cac191a3139bfa0041c0ea1b1d73af359
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.780497 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hcwkd"]
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.873826 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2d23-account-create-update-bgrsz"]
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.943075 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xq7x5" event={"ID":"9c179a32-5560-4ed7-9e71-aaf2f196fb8e","Type":"ContainerStarted","Data":"5f3c14eaa88ba9eb1a3aa57116bcaa2cac191a3139bfa0041c0ea1b1d73af359"}
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.961030 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5217-account-create-update-bdx6j" event={"ID":"479fe0ed-404c-4378-b035-e7518e74e8db","Type":"ContainerStarted","Data":"ca739c4be4dd27c9d7d3a1a51bcea7c7eb0729f58f268e5111137f37230fa9f8"}
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.961104 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5217-account-create-update-bdx6j" event={"ID":"479fe0ed-404c-4378-b035-e7518e74e8db","Type":"ContainerStarted","Data":"0462deaed7f513a0446a1837bc768c42288ce52f4ed1b1d251ba497c457be549"}
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.971392 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xlmv4" event={"ID":"3390f3ef-b993-411b-85ff-e90bb430960c","Type":"ContainerStarted","Data":"eb1ea409a17e46d2202c3d704b7c5d51f91777151f1c999e4308c4d676dea67e"}
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.984269 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-5217-account-create-update-bdx6j" podStartSLOduration=2.980486219 podStartE2EDuration="2.980486219s" podCreationTimestamp="2026-02-19 10:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:26:42.978033936 +0000 UTC m=+1091.504498612" watchObservedRunningTime="2026-02-19 10:26:42.980486219 +0000 UTC m=+1091.506950905"
Feb 19 10:26:42 crc kubenswrapper[4811]: I0219 10:26:42.995856 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8130-account-create-update-trnwt" event={"ID":"f9bab52b-f5c8-425d-bdc0-601e101eb0a4","Type":"ContainerStarted","Data":"23c787d4a9ef4ce7052da72097abdf49ff5f253c41f6564f2b818deb5662edda"}
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.016598 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hcwkd" event={"ID":"5d02e9a5-6f2e-4754-865e-51492f4f81f7","Type":"ContainerStarted","Data":"e754929551086e34798e61ae6e9aea921e320b5a99a77be10ed5111a7f9a5bec"}
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.035567 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"b745fc67058113d2de1d01d11eef0b927a558ca47812328c166a2e7331f3146e"}
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.035622 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dce922ee-7833-4b43-9358-72aa7da07a03","Type":"ContainerStarted","Data":"4fdc97c97481c2aacd41d578f8a458b1961f6031928cbb3dbf9510cdc141c6e7"}
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.055893 4811 generic.go:334] "Generic (PLEG): container finished" podID="774737fd-2783-4012-b2f7-26966fc085d9" containerID="c914bf592c5bef92aa7219d1f7e3fb6ed4cc9e8bd930d693b31b01bbeda82ba4" exitCode=0
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.055947 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wq4nk" event={"ID":"774737fd-2783-4012-b2f7-26966fc085d9","Type":"ContainerDied","Data":"c914bf592c5bef92aa7219d1f7e3fb6ed4cc9e8bd930d693b31b01bbeda82ba4"}
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.055979 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wq4nk" event={"ID":"774737fd-2783-4012-b2f7-26966fc085d9","Type":"ContainerStarted","Data":"ba786453c459e897da9be0fd5b14d0726fc392354ef8a89984c114199448447f"}
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.164441 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.077882765 podStartE2EDuration="38.164422861s" podCreationTimestamp="2026-02-19 10:26:05 +0000 UTC" firstStartedPulling="2026-02-19 10:26:23.784259289 +0000 UTC m=+1072.310723975" lastFinishedPulling="2026-02-19 10:26:39.870799385 +0000 UTC m=+1088.397264071" observedRunningTime="2026-02-19 10:26:43.155307539 +0000 UTC m=+1091.681772225" watchObservedRunningTime="2026-02-19 10:26:43.164422861 +0000 UTC m=+1091.690887547"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.556328 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-lwl9w"]
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.575133 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.579355 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.588037 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-lwl9w"]
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.630392 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.630526 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnq8f\" (UniqueName: \"kubernetes.io/projected/15311a54-a0df-4c3d-b58f-384839b49b34-kube-api-access-gnq8f\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.630703 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-svc\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.630739 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.630777 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.630810 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-config\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.731697 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnq8f\" (UniqueName: \"kubernetes.io/projected/15311a54-a0df-4c3d-b58f-384839b49b34-kube-api-access-gnq8f\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.731896 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-svc\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.732030 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.732110 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.732193 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-config\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.732304 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.734607 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.735224 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.735545 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-config\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.735754 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.735960 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-svc\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.758553 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnq8f\" (UniqueName: \"kubernetes.io/projected/15311a54-a0df-4c3d-b58f-384839b49b34-kube-api-access-gnq8f\") pod \"dnsmasq-dns-764c5664d7-lwl9w\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:43 crc kubenswrapper[4811]: I0219 10:26:43.987769 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.070179 4811 generic.go:334] "Generic (PLEG): container finished" podID="479fe0ed-404c-4378-b035-e7518e74e8db" containerID="ca739c4be4dd27c9d7d3a1a51bcea7c7eb0729f58f268e5111137f37230fa9f8" exitCode=0
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.070287 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5217-account-create-update-bdx6j" event={"ID":"479fe0ed-404c-4378-b035-e7518e74e8db","Type":"ContainerDied","Data":"ca739c4be4dd27c9d7d3a1a51bcea7c7eb0729f58f268e5111137f37230fa9f8"}
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.073130 4811 generic.go:334] "Generic (PLEG): container finished" podID="3390f3ef-b993-411b-85ff-e90bb430960c" containerID="3609919436cad504e5c0128520065daccc34908de46da06a7e7bd72aa75c6b98" exitCode=0
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.073174 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xlmv4" event={"ID":"3390f3ef-b993-411b-85ff-e90bb430960c","Type":"ContainerDied","Data":"3609919436cad504e5c0128520065daccc34908de46da06a7e7bd72aa75c6b98"}
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.075742 4811 generic.go:334] "Generic (PLEG): container finished" podID="f9bab52b-f5c8-425d-bdc0-601e101eb0a4" containerID="9263bd647429f2485b27f75564f6890108b8a719d0155ef66514496a3e315aa1" exitCode=0
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.075784 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8130-account-create-update-trnwt" event={"ID":"f9bab52b-f5c8-425d-bdc0-601e101eb0a4","Type":"ContainerDied","Data":"9263bd647429f2485b27f75564f6890108b8a719d0155ef66514496a3e315aa1"}
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.077577 4811 generic.go:334] "Generic (PLEG): container finished" podID="5d02e9a5-6f2e-4754-865e-51492f4f81f7" containerID="5c34e6191dc38060994eaf80a5fdcc31ae900aa69bd786bb66405cf07310918b" exitCode=0
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.077621 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hcwkd" event={"ID":"5d02e9a5-6f2e-4754-865e-51492f4f81f7","Type":"ContainerDied","Data":"5c34e6191dc38060994eaf80a5fdcc31ae900aa69bd786bb66405cf07310918b"}
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.079269 4811 generic.go:334] "Generic (PLEG): container finished" podID="29308f94-60d2-402b-a4f0-76652537e794" containerID="5711d6fc343ac0054ed86ab6c1aab0d21d8fe8fb6f37d14cb38fada227c0f52c" exitCode=0
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.079304 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2d23-account-create-update-bgrsz" event={"ID":"29308f94-60d2-402b-a4f0-76652537e794","Type":"ContainerDied","Data":"5711d6fc343ac0054ed86ab6c1aab0d21d8fe8fb6f37d14cb38fada227c0f52c"}
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.079407 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2d23-account-create-update-bgrsz" event={"ID":"29308f94-60d2-402b-a4f0-76652537e794","Type":"ContainerStarted","Data":"86a44b31f9d35a700417cb9d15d082b7c990ccbd8350aa1ceb1321938c6687c2"}
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.445585 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wq4nk"
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.544303 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q68l\" (UniqueName: \"kubernetes.io/projected/774737fd-2783-4012-b2f7-26966fc085d9-kube-api-access-8q68l\") pod \"774737fd-2783-4012-b2f7-26966fc085d9\" (UID: \"774737fd-2783-4012-b2f7-26966fc085d9\") "
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.544342 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/774737fd-2783-4012-b2f7-26966fc085d9-operator-scripts\") pod \"774737fd-2783-4012-b2f7-26966fc085d9\" (UID: \"774737fd-2783-4012-b2f7-26966fc085d9\") "
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.545256 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774737fd-2783-4012-b2f7-26966fc085d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "774737fd-2783-4012-b2f7-26966fc085d9" (UID: "774737fd-2783-4012-b2f7-26966fc085d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.549971 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774737fd-2783-4012-b2f7-26966fc085d9-kube-api-access-8q68l" (OuterVolumeSpecName: "kube-api-access-8q68l") pod "774737fd-2783-4012-b2f7-26966fc085d9" (UID: "774737fd-2783-4012-b2f7-26966fc085d9"). InnerVolumeSpecName "kube-api-access-8q68l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.598252 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-lwl9w"]
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.646113 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q68l\" (UniqueName: \"kubernetes.io/projected/774737fd-2783-4012-b2f7-26966fc085d9-kube-api-access-8q68l\") on node \"crc\" DevicePath \"\""
Feb 19 10:26:44 crc kubenswrapper[4811]: I0219 10:26:44.646150 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/774737fd-2783-4012-b2f7-26966fc085d9-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:26:45 crc kubenswrapper[4811]: I0219 10:26:45.092487 4811 generic.go:334] "Generic (PLEG): container finished" podID="15311a54-a0df-4c3d-b58f-384839b49b34" containerID="cda29eb6030af822ff4fce6213a5a7ba4f3fbd331e2feebf60e0c57ac95fc6bf" exitCode=0
Feb 19 10:26:45 crc kubenswrapper[4811]: I0219 10:26:45.092867 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" event={"ID":"15311a54-a0df-4c3d-b58f-384839b49b34","Type":"ContainerDied","Data":"cda29eb6030af822ff4fce6213a5a7ba4f3fbd331e2feebf60e0c57ac95fc6bf"}
Feb 19 10:26:45 crc kubenswrapper[4811]: I0219 10:26:45.092902 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" event={"ID":"15311a54-a0df-4c3d-b58f-384839b49b34","Type":"ContainerStarted","Data":"b89b052b6d51778cc78e423573acc6b17528770c64cda27332bd2fbdde968c3f"}
Feb 19 10:26:45 crc kubenswrapper[4811]: I0219 10:26:45.098268 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wq4nk" event={"ID":"774737fd-2783-4012-b2f7-26966fc085d9","Type":"ContainerDied","Data":"ba786453c459e897da9be0fd5b14d0726fc392354ef8a89984c114199448447f"}
Feb 19 10:26:45 crc kubenswrapper[4811]: I0219 10:26:45.098327 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba786453c459e897da9be0fd5b14d0726fc392354ef8a89984c114199448447f"
Feb 19 10:26:45 crc kubenswrapper[4811]: I0219 10:26:45.098423 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wq4nk"
Feb 19 10:26:46 crc kubenswrapper[4811]: I0219 10:26:46.112381 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" event={"ID":"15311a54-a0df-4c3d-b58f-384839b49b34","Type":"ContainerStarted","Data":"0c42924b955521195924491430f123cf78cd53674446d731e511f62fd5c6350f"}
Feb 19 10:26:46 crc kubenswrapper[4811]: I0219 10:26:46.112750 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w"
Feb 19 10:26:46 crc kubenswrapper[4811]: I0219 10:26:46.609877 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" podStartSLOduration=3.60985716 podStartE2EDuration="3.60985716s" podCreationTimestamp="2026-02-19 10:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:26:46.157693105 +0000 UTC m=+1094.684157791" watchObservedRunningTime="2026-02-19 10:26:46.60985716 +0000 UTC m=+1095.136321846"
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.134309 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8130-account-create-update-trnwt" event={"ID":"f9bab52b-f5c8-425d-bdc0-601e101eb0a4","Type":"ContainerDied","Data":"23c787d4a9ef4ce7052da72097abdf49ff5f253c41f6564f2b818deb5662edda"}
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.134356 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23c787d4a9ef4ce7052da72097abdf49ff5f253c41f6564f2b818deb5662edda"
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.138318 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hcwkd" event={"ID":"5d02e9a5-6f2e-4754-865e-51492f4f81f7","Type":"ContainerDied","Data":"e754929551086e34798e61ae6e9aea921e320b5a99a77be10ed5111a7f9a5bec"}
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.138365 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e754929551086e34798e61ae6e9aea921e320b5a99a77be10ed5111a7f9a5bec"
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.140407 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2d23-account-create-update-bgrsz" event={"ID":"29308f94-60d2-402b-a4f0-76652537e794","Type":"ContainerDied","Data":"86a44b31f9d35a700417cb9d15d082b7c990ccbd8350aa1ceb1321938c6687c2"}
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.140494 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86a44b31f9d35a700417cb9d15d082b7c990ccbd8350aa1ceb1321938c6687c2"
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.142516 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5217-account-create-update-bdx6j" event={"ID":"479fe0ed-404c-4378-b035-e7518e74e8db","Type":"ContainerDied","Data":"0462deaed7f513a0446a1837bc768c42288ce52f4ed1b1d251ba497c457be549"}
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.142551 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0462deaed7f513a0446a1837bc768c42288ce52f4ed1b1d251ba497c457be549"
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.144482 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xlmv4" event={"ID":"3390f3ef-b993-411b-85ff-e90bb430960c","Type":"ContainerDied","Data":"eb1ea409a17e46d2202c3d704b7c5d51f91777151f1c999e4308c4d676dea67e"}
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.144510 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb1ea409a17e46d2202c3d704b7c5d51f91777151f1c999e4308c4d676dea67e"
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.337839 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5217-account-create-update-bdx6j"
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.375846 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8130-account-create-update-trnwt"
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.409640 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2d23-account-create-update-bgrsz"
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.419574 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hcwkd"
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.425064 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xlmv4" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.436615 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxdrp\" (UniqueName: \"kubernetes.io/projected/479fe0ed-404c-4378-b035-e7518e74e8db-kube-api-access-vxdrp\") pod \"479fe0ed-404c-4378-b035-e7518e74e8db\" (UID: \"479fe0ed-404c-4378-b035-e7518e74e8db\") " Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.436822 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479fe0ed-404c-4378-b035-e7518e74e8db-operator-scripts\") pod \"479fe0ed-404c-4378-b035-e7518e74e8db\" (UID: \"479fe0ed-404c-4378-b035-e7518e74e8db\") " Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.438230 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/479fe0ed-404c-4378-b035-e7518e74e8db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "479fe0ed-404c-4378-b035-e7518e74e8db" (UID: "479fe0ed-404c-4378-b035-e7518e74e8db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.443732 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/479fe0ed-404c-4378-b035-e7518e74e8db-kube-api-access-vxdrp" (OuterVolumeSpecName: "kube-api-access-vxdrp") pod "479fe0ed-404c-4378-b035-e7518e74e8db" (UID: "479fe0ed-404c-4378-b035-e7518e74e8db"). InnerVolumeSpecName "kube-api-access-vxdrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.538057 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqv27\" (UniqueName: \"kubernetes.io/projected/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-kube-api-access-sqv27\") pod \"f9bab52b-f5c8-425d-bdc0-601e101eb0a4\" (UID: \"f9bab52b-f5c8-425d-bdc0-601e101eb0a4\") " Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.538143 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-operator-scripts\") pod \"f9bab52b-f5c8-425d-bdc0-601e101eb0a4\" (UID: \"f9bab52b-f5c8-425d-bdc0-601e101eb0a4\") " Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.538246 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3390f3ef-b993-411b-85ff-e90bb430960c-operator-scripts\") pod \"3390f3ef-b993-411b-85ff-e90bb430960c\" (UID: \"3390f3ef-b993-411b-85ff-e90bb430960c\") " Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.538273 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d02e9a5-6f2e-4754-865e-51492f4f81f7-operator-scripts\") pod \"5d02e9a5-6f2e-4754-865e-51492f4f81f7\" (UID: \"5d02e9a5-6f2e-4754-865e-51492f4f81f7\") " Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.538299 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfcm4\" (UniqueName: \"kubernetes.io/projected/5d02e9a5-6f2e-4754-865e-51492f4f81f7-kube-api-access-sfcm4\") pod \"5d02e9a5-6f2e-4754-865e-51492f4f81f7\" (UID: \"5d02e9a5-6f2e-4754-865e-51492f4f81f7\") " Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.538338 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-9nh7x\" (UniqueName: \"kubernetes.io/projected/3390f3ef-b993-411b-85ff-e90bb430960c-kube-api-access-9nh7x\") pod \"3390f3ef-b993-411b-85ff-e90bb430960c\" (UID: \"3390f3ef-b993-411b-85ff-e90bb430960c\") " Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.538375 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29308f94-60d2-402b-a4f0-76652537e794-operator-scripts\") pod \"29308f94-60d2-402b-a4f0-76652537e794\" (UID: \"29308f94-60d2-402b-a4f0-76652537e794\") " Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.538442 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxf4d\" (UniqueName: \"kubernetes.io/projected/29308f94-60d2-402b-a4f0-76652537e794-kube-api-access-dxf4d\") pod \"29308f94-60d2-402b-a4f0-76652537e794\" (UID: \"29308f94-60d2-402b-a4f0-76652537e794\") " Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.538907 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d02e9a5-6f2e-4754-865e-51492f4f81f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d02e9a5-6f2e-4754-865e-51492f4f81f7" (UID: "5d02e9a5-6f2e-4754-865e-51492f4f81f7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.538992 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxdrp\" (UniqueName: \"kubernetes.io/projected/479fe0ed-404c-4378-b035-e7518e74e8db-kube-api-access-vxdrp\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.538995 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3390f3ef-b993-411b-85ff-e90bb430960c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3390f3ef-b993-411b-85ff-e90bb430960c" (UID: "3390f3ef-b993-411b-85ff-e90bb430960c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.539011 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479fe0ed-404c-4378-b035-e7518e74e8db-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.539055 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9bab52b-f5c8-425d-bdc0-601e101eb0a4" (UID: "f9bab52b-f5c8-425d-bdc0-601e101eb0a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.539298 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29308f94-60d2-402b-a4f0-76652537e794-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29308f94-60d2-402b-a4f0-76652537e794" (UID: "29308f94-60d2-402b-a4f0-76652537e794"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.543063 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d02e9a5-6f2e-4754-865e-51492f4f81f7-kube-api-access-sfcm4" (OuterVolumeSpecName: "kube-api-access-sfcm4") pod "5d02e9a5-6f2e-4754-865e-51492f4f81f7" (UID: "5d02e9a5-6f2e-4754-865e-51492f4f81f7"). InnerVolumeSpecName "kube-api-access-sfcm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.543651 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-kube-api-access-sqv27" (OuterVolumeSpecName: "kube-api-access-sqv27") pod "f9bab52b-f5c8-425d-bdc0-601e101eb0a4" (UID: "f9bab52b-f5c8-425d-bdc0-601e101eb0a4"). InnerVolumeSpecName "kube-api-access-sqv27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.543861 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3390f3ef-b993-411b-85ff-e90bb430960c-kube-api-access-9nh7x" (OuterVolumeSpecName: "kube-api-access-9nh7x") pod "3390f3ef-b993-411b-85ff-e90bb430960c" (UID: "3390f3ef-b993-411b-85ff-e90bb430960c"). InnerVolumeSpecName "kube-api-access-9nh7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.543935 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29308f94-60d2-402b-a4f0-76652537e794-kube-api-access-dxf4d" (OuterVolumeSpecName: "kube-api-access-dxf4d") pod "29308f94-60d2-402b-a4f0-76652537e794" (UID: "29308f94-60d2-402b-a4f0-76652537e794"). InnerVolumeSpecName "kube-api-access-dxf4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.640906 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3390f3ef-b993-411b-85ff-e90bb430960c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.640946 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d02e9a5-6f2e-4754-865e-51492f4f81f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.640957 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfcm4\" (UniqueName: \"kubernetes.io/projected/5d02e9a5-6f2e-4754-865e-51492f4f81f7-kube-api-access-sfcm4\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.640969 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nh7x\" (UniqueName: \"kubernetes.io/projected/3390f3ef-b993-411b-85ff-e90bb430960c-kube-api-access-9nh7x\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.640978 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29308f94-60d2-402b-a4f0-76652537e794-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.640988 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxf4d\" (UniqueName: \"kubernetes.io/projected/29308f94-60d2-402b-a4f0-76652537e794-kube-api-access-dxf4d\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.641004 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqv27\" (UniqueName: \"kubernetes.io/projected/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-kube-api-access-sqv27\") on node \"crc\" DevicePath \"\"" 
Feb 19 10:26:48 crc kubenswrapper[4811]: I0219 10:26:48.641013 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9bab52b-f5c8-425d-bdc0-601e101eb0a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:49 crc kubenswrapper[4811]: I0219 10:26:49.157818 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xq7x5" event={"ID":"9c179a32-5560-4ed7-9e71-aaf2f196fb8e","Type":"ContainerStarted","Data":"6d889ee7739938850441777cdaca73b11578444dfbefde99c16ebfef6aa5eb5b"} Feb 19 10:26:49 crc kubenswrapper[4811]: I0219 10:26:49.160053 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hcwkd" Feb 19 10:26:49 crc kubenswrapper[4811]: I0219 10:26:49.160143 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xlmv4" Feb 19 10:26:49 crc kubenswrapper[4811]: I0219 10:26:49.160136 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5217-account-create-update-bdx6j" Feb 19 10:26:49 crc kubenswrapper[4811]: I0219 10:26:49.160053 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z4vh6" event={"ID":"a9133229-76e0-4848-b61e-f70c53216929","Type":"ContainerStarted","Data":"3ebe304eba92b1c15fdd3e06679060c5a4e36e380b5c3c66b49a6e27eb8abe44"} Feb 19 10:26:49 crc kubenswrapper[4811]: I0219 10:26:49.160213 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2d23-account-create-update-bgrsz" Feb 19 10:26:49 crc kubenswrapper[4811]: I0219 10:26:49.160284 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8130-account-create-update-trnwt" Feb 19 10:26:49 crc kubenswrapper[4811]: I0219 10:26:49.184380 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-xq7x5" podStartSLOduration=2.785208089 podStartE2EDuration="8.18435897s" podCreationTimestamp="2026-02-19 10:26:41 +0000 UTC" firstStartedPulling="2026-02-19 10:26:42.797830889 +0000 UTC m=+1091.324295575" lastFinishedPulling="2026-02-19 10:26:48.19698176 +0000 UTC m=+1096.723446456" observedRunningTime="2026-02-19 10:26:49.181345383 +0000 UTC m=+1097.707810089" watchObservedRunningTime="2026-02-19 10:26:49.18435897 +0000 UTC m=+1097.710823656" Feb 19 10:26:49 crc kubenswrapper[4811]: I0219 10:26:49.215064 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-z4vh6" podStartSLOduration=2.955106582 podStartE2EDuration="31.215005292s" podCreationTimestamp="2026-02-19 10:26:18 +0000 UTC" firstStartedPulling="2026-02-19 10:26:19.928007689 +0000 UTC m=+1068.454472375" lastFinishedPulling="2026-02-19 10:26:48.187906399 +0000 UTC m=+1096.714371085" observedRunningTime="2026-02-19 10:26:49.21336874 +0000 UTC m=+1097.739833436" watchObservedRunningTime="2026-02-19 10:26:49.215005292 +0000 UTC m=+1097.741469978" Feb 19 10:26:53 crc kubenswrapper[4811]: I0219 10:26:53.197644 4811 generic.go:334] "Generic (PLEG): container finished" podID="9c179a32-5560-4ed7-9e71-aaf2f196fb8e" containerID="6d889ee7739938850441777cdaca73b11578444dfbefde99c16ebfef6aa5eb5b" exitCode=0 Feb 19 10:26:53 crc kubenswrapper[4811]: I0219 10:26:53.197747 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xq7x5" event={"ID":"9c179a32-5560-4ed7-9e71-aaf2f196fb8e","Type":"ContainerDied","Data":"6d889ee7739938850441777cdaca73b11578444dfbefde99c16ebfef6aa5eb5b"} Feb 19 10:26:53 crc kubenswrapper[4811]: I0219 10:26:53.990169 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.052535 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7qmw4"] Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.052796 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-7qmw4" podUID="5f02bee8-3f33-4fff-9d32-5bbd8463e842" containerName="dnsmasq-dns" containerID="cri-o://8470527caf2dcba0a0c9b4a578bffec3fbfc9426f77526ec80f2c9c0d268ed4d" gracePeriod=10 Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.223818 4811 generic.go:334] "Generic (PLEG): container finished" podID="5f02bee8-3f33-4fff-9d32-5bbd8463e842" containerID="8470527caf2dcba0a0c9b4a578bffec3fbfc9426f77526ec80f2c9c0d268ed4d" exitCode=0 Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.223917 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7qmw4" event={"ID":"5f02bee8-3f33-4fff-9d32-5bbd8463e842","Type":"ContainerDied","Data":"8470527caf2dcba0a0c9b4a578bffec3fbfc9426f77526ec80f2c9c0d268ed4d"} Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.641162 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xq7x5" Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.782595 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-config-data\") pod \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.782674 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-combined-ca-bundle\") pod \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.782800 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpbr9\" (UniqueName: \"kubernetes.io/projected/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-kube-api-access-dpbr9\") pod \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\" (UID: \"9c179a32-5560-4ed7-9e71-aaf2f196fb8e\") " Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.803141 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-kube-api-access-dpbr9" (OuterVolumeSpecName: "kube-api-access-dpbr9") pod "9c179a32-5560-4ed7-9e71-aaf2f196fb8e" (UID: "9c179a32-5560-4ed7-9e71-aaf2f196fb8e"). InnerVolumeSpecName "kube-api-access-dpbr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.813761 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c179a32-5560-4ed7-9e71-aaf2f196fb8e" (UID: "9c179a32-5560-4ed7-9e71-aaf2f196fb8e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.869038 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-config-data" (OuterVolumeSpecName: "config-data") pod "9c179a32-5560-4ed7-9e71-aaf2f196fb8e" (UID: "9c179a32-5560-4ed7-9e71-aaf2f196fb8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.886277 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.886337 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.886358 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpbr9\" (UniqueName: \"kubernetes.io/projected/9c179a32-5560-4ed7-9e71-aaf2f196fb8e-kube-api-access-dpbr9\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:54 crc kubenswrapper[4811]: I0219 10:26:54.940412 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.089288 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-dns-svc\") pod \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.089375 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-nb\") pod \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.089417 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6hxn\" (UniqueName: \"kubernetes.io/projected/5f02bee8-3f33-4fff-9d32-5bbd8463e842-kube-api-access-v6hxn\") pod \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.089526 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-config\") pod \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.089567 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-sb\") pod \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\" (UID: \"5f02bee8-3f33-4fff-9d32-5bbd8463e842\") " Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.095636 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5f02bee8-3f33-4fff-9d32-5bbd8463e842-kube-api-access-v6hxn" (OuterVolumeSpecName: "kube-api-access-v6hxn") pod "5f02bee8-3f33-4fff-9d32-5bbd8463e842" (UID: "5f02bee8-3f33-4fff-9d32-5bbd8463e842"). InnerVolumeSpecName "kube-api-access-v6hxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.133695 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f02bee8-3f33-4fff-9d32-5bbd8463e842" (UID: "5f02bee8-3f33-4fff-9d32-5bbd8463e842"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.135380 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-config" (OuterVolumeSpecName: "config") pod "5f02bee8-3f33-4fff-9d32-5bbd8463e842" (UID: "5f02bee8-3f33-4fff-9d32-5bbd8463e842"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.139487 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f02bee8-3f33-4fff-9d32-5bbd8463e842" (UID: "5f02bee8-3f33-4fff-9d32-5bbd8463e842"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.141046 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f02bee8-3f33-4fff-9d32-5bbd8463e842" (UID: "5f02bee8-3f33-4fff-9d32-5bbd8463e842"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.191248 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.191529 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.191612 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6hxn\" (UniqueName: \"kubernetes.io/projected/5f02bee8-3f33-4fff-9d32-5bbd8463e842-kube-api-access-v6hxn\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.191706 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.191765 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f02bee8-3f33-4fff-9d32-5bbd8463e842-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.237074 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xq7x5" event={"ID":"9c179a32-5560-4ed7-9e71-aaf2f196fb8e","Type":"ContainerDied","Data":"5f3c14eaa88ba9eb1a3aa57116bcaa2cac191a3139bfa0041c0ea1b1d73af359"} Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.238483 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f3c14eaa88ba9eb1a3aa57116bcaa2cac191a3139bfa0041c0ea1b1d73af359" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.237135 4811 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xq7x5" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.239742 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7qmw4" event={"ID":"5f02bee8-3f33-4fff-9d32-5bbd8463e842","Type":"ContainerDied","Data":"a750b837ff14695aed7b8a7e95e9d2c1f35c41be99cd0a3c4c4f4dd1a9727cc7"} Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.239812 4811 scope.go:117] "RemoveContainer" containerID="8470527caf2dcba0a0c9b4a578bffec3fbfc9426f77526ec80f2c9c0d268ed4d" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.239998 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7qmw4" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.299101 4811 scope.go:117] "RemoveContainer" containerID="6c4d603237453babd3f8a649fe46dda5e0b5fe7934efda14d04f274fbcbb64b7" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.307813 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7qmw4"] Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.319507 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7qmw4"] Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.463392 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gzzkk"] Feb 19 10:26:55 crc kubenswrapper[4811]: E0219 10:26:55.466986 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f02bee8-3f33-4fff-9d32-5bbd8463e842" containerName="init" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467028 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f02bee8-3f33-4fff-9d32-5bbd8463e842" containerName="init" Feb 19 10:26:55 crc kubenswrapper[4811]: E0219 10:26:55.467055 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774737fd-2783-4012-b2f7-26966fc085d9" 
containerName="mariadb-database-create" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467064 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="774737fd-2783-4012-b2f7-26966fc085d9" containerName="mariadb-database-create" Feb 19 10:26:55 crc kubenswrapper[4811]: E0219 10:26:55.467072 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29308f94-60d2-402b-a4f0-76652537e794" containerName="mariadb-account-create-update" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467081 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="29308f94-60d2-402b-a4f0-76652537e794" containerName="mariadb-account-create-update" Feb 19 10:26:55 crc kubenswrapper[4811]: E0219 10:26:55.467093 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3390f3ef-b993-411b-85ff-e90bb430960c" containerName="mariadb-database-create" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467100 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3390f3ef-b993-411b-85ff-e90bb430960c" containerName="mariadb-database-create" Feb 19 10:26:55 crc kubenswrapper[4811]: E0219 10:26:55.467111 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f02bee8-3f33-4fff-9d32-5bbd8463e842" containerName="dnsmasq-dns" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467119 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f02bee8-3f33-4fff-9d32-5bbd8463e842" containerName="dnsmasq-dns" Feb 19 10:26:55 crc kubenswrapper[4811]: E0219 10:26:55.467129 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9bab52b-f5c8-425d-bdc0-601e101eb0a4" containerName="mariadb-account-create-update" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467136 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9bab52b-f5c8-425d-bdc0-601e101eb0a4" containerName="mariadb-account-create-update" Feb 19 10:26:55 crc kubenswrapper[4811]: E0219 10:26:55.467152 4811 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="479fe0ed-404c-4378-b035-e7518e74e8db" containerName="mariadb-account-create-update" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467159 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="479fe0ed-404c-4378-b035-e7518e74e8db" containerName="mariadb-account-create-update" Feb 19 10:26:55 crc kubenswrapper[4811]: E0219 10:26:55.467174 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c179a32-5560-4ed7-9e71-aaf2f196fb8e" containerName="keystone-db-sync" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467181 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c179a32-5560-4ed7-9e71-aaf2f196fb8e" containerName="keystone-db-sync" Feb 19 10:26:55 crc kubenswrapper[4811]: E0219 10:26:55.467195 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d02e9a5-6f2e-4754-865e-51492f4f81f7" containerName="mariadb-database-create" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467201 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d02e9a5-6f2e-4754-865e-51492f4f81f7" containerName="mariadb-database-create" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467652 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3390f3ef-b993-411b-85ff-e90bb430960c" containerName="mariadb-database-create" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467674 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="774737fd-2783-4012-b2f7-26966fc085d9" containerName="mariadb-database-create" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467683 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c179a32-5560-4ed7-9e71-aaf2f196fb8e" containerName="keystone-db-sync" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467692 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f02bee8-3f33-4fff-9d32-5bbd8463e842" containerName="dnsmasq-dns" Feb 19 10:26:55 crc kubenswrapper[4811]: 
I0219 10:26:55.467701 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9bab52b-f5c8-425d-bdc0-601e101eb0a4" containerName="mariadb-account-create-update" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467714 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="479fe0ed-404c-4378-b035-e7518e74e8db" containerName="mariadb-account-create-update" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467727 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d02e9a5-6f2e-4754-865e-51492f4f81f7" containerName="mariadb-database-create" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.467733 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="29308f94-60d2-402b-a4f0-76652537e794" containerName="mariadb-account-create-update" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.468836 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.488308 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gzzkk"] Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.532546 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lgcp6"] Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.534029 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.536987 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.537265 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-znphl" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.541701 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.541748 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.541925 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lgcp6"] Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.542197 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.616656 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8sb4\" (UniqueName: \"kubernetes.io/projected/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-kube-api-access-l8sb4\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.618779 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-svc\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.619031 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.619184 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.619472 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-config\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.619650 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.700038 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-758d556dd9-lgxtx"] Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.701578 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.707366 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.709081 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-758d556dd9-lgxtx"] Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.712878 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-78qcv" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.713015 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.713057 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.727781 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.727845 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-credential-keys\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.727967 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8sb4\" (UniqueName: \"kubernetes.io/projected/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-kube-api-access-l8sb4\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: 
\"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.727997 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-fernet-keys\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.728024 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-scripts\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.728082 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdgq\" (UniqueName: \"kubernetes.io/projected/9bf7991a-1995-463e-8964-139036b7e6d5-kube-api-access-zrdgq\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.728135 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-svc\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.728178 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-config-data\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") 
" pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.728212 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.728242 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.728302 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-combined-ca-bundle\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.728342 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-config\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.730312 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 
10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.745733 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.746224 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.748603 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-config\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.748699 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-svc\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.794932 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8sb4\" (UniqueName: \"kubernetes.io/projected/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-kube-api-access-l8sb4\") pod \"dnsmasq-dns-5959f8865f-gzzkk\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.818696 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.820230 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rqgzq"] Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.821260 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.833803 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-combined-ca-bundle\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.833852 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-scripts\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.833880 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-credential-keys\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.833937 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-fernet-keys\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.833955 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89f60543-0966-4b2a-b0d0-beb838040f9e-logs\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.833975 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-scripts\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.834003 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84d7t\" (UniqueName: \"kubernetes.io/projected/89f60543-0966-4b2a-b0d0-beb838040f9e-kube-api-access-84d7t\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.834033 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdgq\" (UniqueName: \"kubernetes.io/projected/9bf7991a-1995-463e-8964-139036b7e6d5-kube-api-access-zrdgq\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.834068 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-config-data\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.834088 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/89f60543-0966-4b2a-b0d0-beb838040f9e-horizon-secret-key\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.834117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-config-data\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.851605 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-combined-ca-bundle\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.871471 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-credential-keys\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.872966 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-fernet-keys\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.880542 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rfbsz" Feb 19 
10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.880806 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.880869 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-scripts\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.880982 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.899755 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rqgzq"] Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.922420 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-config-data\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.936104 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84d7t\" (UniqueName: \"kubernetes.io/projected/89f60543-0966-4b2a-b0d0-beb838040f9e-kube-api-access-84d7t\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.936206 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/89f60543-0966-4b2a-b0d0-beb838040f9e-horizon-secret-key\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc 
kubenswrapper[4811]: I0219 10:26:55.936237 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7495v\" (UniqueName: \"kubernetes.io/projected/d369ea05-6a36-44fd-bdd0-26eead9ba16d-kube-api-access-7495v\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.936260 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-config-data\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.936282 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-db-sync-config-data\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.936310 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-config-data\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.936331 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-combined-ca-bundle\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 
10:26:55.936356 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-scripts\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.936388 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-scripts\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.936421 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d369ea05-6a36-44fd-bdd0-26eead9ba16d-etc-machine-id\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.936534 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89f60543-0966-4b2a-b0d0-beb838040f9e-logs\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.937053 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89f60543-0966-4b2a-b0d0-beb838040f9e-logs\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.940901 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-scripts\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.941684 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-config-data\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.961519 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/89f60543-0966-4b2a-b0d0-beb838040f9e-horizon-secret-key\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:55 crc kubenswrapper[4811]: I0219 10:26:55.983140 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdgq\" (UniqueName: \"kubernetes.io/projected/9bf7991a-1995-463e-8964-139036b7e6d5-kube-api-access-zrdgq\") pod \"keystone-bootstrap-lgcp6\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.001597 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-8skw7"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.001639 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84d7t\" (UniqueName: \"kubernetes.io/projected/89f60543-0966-4b2a-b0d0-beb838040f9e-kube-api-access-84d7t\") pod \"horizon-758d556dd9-lgxtx\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.003342 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8skw7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.032543 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8skw7"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.033469 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.039865 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.041374 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-config-data\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.046754 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.046890 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7495v\" (UniqueName: \"kubernetes.io/projected/d369ea05-6a36-44fd-bdd0-26eead9ba16d-kube-api-access-7495v\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.058837 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-db-sync-config-data\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.059038 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-combined-ca-bundle\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.059170 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-scripts\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.059340 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d369ea05-6a36-44fd-bdd0-26eead9ba16d-etc-machine-id\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.059819 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d369ea05-6a36-44fd-bdd0-26eead9ba16d-etc-machine-id\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.060709 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-config-data\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.046949 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rkn7w" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.075946 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-db-sync-config-data\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.079012 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-scripts\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.081919 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-combined-ca-bundle\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.122631 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7495v\" (UniqueName: \"kubernetes.io/projected/d369ea05-6a36-44fd-bdd0-26eead9ba16d-kube-api-access-7495v\") pod \"cinder-db-sync-rqgzq\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.161467 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg6cp\" (UniqueName: \"kubernetes.io/projected/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-kube-api-access-bg6cp\") pod \"neutron-db-sync-8skw7\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " pod="openstack/neutron-db-sync-8skw7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.161555 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-config\") 
pod \"neutron-db-sync-8skw7\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " pod="openstack/neutron-db-sync-8skw7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.161657 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-combined-ca-bundle\") pod \"neutron-db-sync-8skw7\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " pod="openstack/neutron-db-sync-8skw7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.182761 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-928sb"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.184128 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.198814 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.205111 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tdb5v" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.205837 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.236847 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-928sb"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.257130 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.262107 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.278887 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg6cp\" (UniqueName: \"kubernetes.io/projected/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-kube-api-access-bg6cp\") pod \"neutron-db-sync-8skw7\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " pod="openstack/neutron-db-sync-8skw7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.278968 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-config\") pod \"neutron-db-sync-8skw7\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " pod="openstack/neutron-db-sync-8skw7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.279131 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-scripts\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.279464 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-config-data\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.279510 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcvs4\" (UniqueName: \"kubernetes.io/projected/f091a50a-3df7-47c9-84a9-02c074138c7b-kube-api-access-gcvs4\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 
10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.279639 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-combined-ca-bundle\") pod \"neutron-db-sync-8skw7\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " pod="openstack/neutron-db-sync-8skw7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.279787 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-combined-ca-bundle\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.279956 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f091a50a-3df7-47c9-84a9-02c074138c7b-logs\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.296335 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-config\") pod \"neutron-db-sync-8skw7\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " pod="openstack/neutron-db-sync-8skw7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.310434 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-combined-ca-bundle\") pod \"neutron-db-sync-8skw7\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " pod="openstack/neutron-db-sync-8skw7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.361501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-bg6cp\" (UniqueName: \"kubernetes.io/projected/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-kube-api-access-bg6cp\") pod \"neutron-db-sync-8skw7\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " pod="openstack/neutron-db-sync-8skw7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.381639 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-scripts\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.382017 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-config-data\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.382133 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcvs4\" (UniqueName: \"kubernetes.io/projected/f091a50a-3df7-47c9-84a9-02c074138c7b-kube-api-access-gcvs4\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.382280 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-combined-ca-bundle\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.382403 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f091a50a-3df7-47c9-84a9-02c074138c7b-logs\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.385116 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f091a50a-3df7-47c9-84a9-02c074138c7b-logs\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.406910 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8skw7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.407444 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-combined-ca-bundle\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.407865 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xggvg"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.408230 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-config-data\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.415867 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-scripts\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc 
kubenswrapper[4811]: I0219 10:26:56.427119 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xggvg" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.464973 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2jqkh" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.465154 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.499623 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcvs4\" (UniqueName: \"kubernetes.io/projected/f091a50a-3df7-47c9-84a9-02c074138c7b-kube-api-access-gcvs4\") pod \"placement-db-sync-928sb\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.499799 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xggvg"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.532506 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-866fcfcb4f-gbvbw"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.533011 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-928sb" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.534173 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.560201 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.563309 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.576245 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.576522 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.593272 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-config-data\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.593577 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-combined-ca-bundle\") pod \"barbican-db-sync-xggvg\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " pod="openstack/barbican-db-sync-xggvg" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.593691 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd777ce0-fb21-405b-8619-19318d2057d4-logs\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.593799 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-run-httpd\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.593876 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-config-data\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.593952 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-log-httpd\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.594030 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.594104 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd777ce0-fb21-405b-8619-19318d2057d4-horizon-secret-key\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.594191 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j97vh\" (UniqueName: \"kubernetes.io/projected/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-kube-api-access-j97vh\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.594270 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-pp5bw\" (UniqueName: \"kubernetes.io/projected/dd777ce0-fb21-405b-8619-19318d2057d4-kube-api-access-pp5bw\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.594355 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272xl\" (UniqueName: \"kubernetes.io/projected/9741f82c-1dce-4e95-9442-c424bcf8927d-kube-api-access-272xl\") pod \"barbican-db-sync-xggvg\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " pod="openstack/barbican-db-sync-xggvg" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.595785 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-scripts\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.595865 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-scripts\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.595971 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-db-sync-config-data\") pod \"barbican-db-sync-xggvg\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " pod="openstack/barbican-db-sync-xggvg" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.596053 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.677922 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f02bee8-3f33-4fff-9d32-5bbd8463e842" path="/var/lib/kubelet/pods/5f02bee8-3f33-4fff-9d32-5bbd8463e842/volumes" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.679494 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-866fcfcb4f-gbvbw"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.688627 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gzzkk"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.688700 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.696612 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xzmx7"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.698806 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.703132 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-config-data\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.703213 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-log-httpd\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.703242 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.703275 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.703303 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd777ce0-fb21-405b-8619-19318d2057d4-horizon-secret-key\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.703328 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j97vh\" (UniqueName: \"kubernetes.io/projected/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-kube-api-access-j97vh\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.703354 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5bw\" (UniqueName: \"kubernetes.io/projected/dd777ce0-fb21-405b-8619-19318d2057d4-kube-api-access-pp5bw\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.703386 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-272xl\" (UniqueName: \"kubernetes.io/projected/9741f82c-1dce-4e95-9442-c424bcf8927d-kube-api-access-272xl\") pod \"barbican-db-sync-xggvg\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " pod="openstack/barbican-db-sync-xggvg" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.703428 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-scripts\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704237 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-scripts\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704310 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvjn7\" (UniqueName: 
\"kubernetes.io/projected/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-kube-api-access-wvjn7\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704343 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-config\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704365 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704386 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704425 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-db-sync-config-data\") pod \"barbican-db-sync-xggvg\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " pod="openstack/barbican-db-sync-xggvg" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704470 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704494 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704526 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-config-data\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704548 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-combined-ca-bundle\") pod \"barbican-db-sync-xggvg\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " pod="openstack/barbican-db-sync-xggvg" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704597 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd777ce0-fb21-405b-8619-19318d2057d4-logs\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.704632 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-run-httpd\") pod \"ceilometer-0\" (UID: 
\"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.705865 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-run-httpd\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.709068 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-log-httpd\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.711396 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-config-data\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.711950 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd777ce0-fb21-405b-8619-19318d2057d4-logs\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.712125 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xzmx7"] Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.712295 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-scripts\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 
19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.725740 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.726495 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-combined-ca-bundle\") pod \"barbican-db-sync-xggvg\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " pod="openstack/barbican-db-sync-xggvg" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.732929 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.735713 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-db-sync-config-data\") pod \"barbican-db-sync-xggvg\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " pod="openstack/barbican-db-sync-xggvg" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.736179 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5bw\" (UniqueName: \"kubernetes.io/projected/dd777ce0-fb21-405b-8619-19318d2057d4-kube-api-access-pp5bw\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.747464 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j97vh\" (UniqueName: \"kubernetes.io/projected/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-kube-api-access-j97vh\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.747689 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd777ce0-fb21-405b-8619-19318d2057d4-horizon-secret-key\") pod \"horizon-866fcfcb4f-gbvbw\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.748290 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-scripts\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.748875 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-272xl\" (UniqueName: \"kubernetes.io/projected/9741f82c-1dce-4e95-9442-c424bcf8927d-kube-api-access-272xl\") pod \"barbican-db-sync-xggvg\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " pod="openstack/barbican-db-sync-xggvg" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.756691 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-config-data\") pod \"ceilometer-0\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " pod="openstack/ceilometer-0" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.808921 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: 
\"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.809384 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.809438 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvjn7\" (UniqueName: \"kubernetes.io/projected/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-kube-api-access-wvjn7\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.809489 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-config\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.809516 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.809536 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " 
pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.810702 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.811300 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.812199 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.814898 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.820440 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-config\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.844427 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvjn7\" (UniqueName: \"kubernetes.io/projected/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-kube-api-access-wvjn7\") pod \"dnsmasq-dns-58dd9ff6bc-xzmx7\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.879511 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xggvg" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.937107 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:26:56 crc kubenswrapper[4811]: I0219 10:26:56.977521 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:26:57 crc kubenswrapper[4811]: W0219 10:26:57.025815 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3cc88aa_0785_4d47_88d3_ea590bbf33f4.slice/crio-ab83c858db20765d6a6b0f2e72e8253c62b594cd4f757b0e3fb1b312ef56e162 WatchSource:0}: Error finding container ab83c858db20765d6a6b0f2e72e8253c62b594cd4f757b0e3fb1b312ef56e162: Status 404 returned error can't find the container with id ab83c858db20765d6a6b0f2e72e8253c62b594cd4f757b0e3fb1b312ef56e162 Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.027284 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gzzkk"] Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.086469 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.339851 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-758d556dd9-lgxtx"] Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.361801 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-758d556dd9-lgxtx" event={"ID":"89f60543-0966-4b2a-b0d0-beb838040f9e","Type":"ContainerStarted","Data":"94a83c648a7993bed5db7a96fdbc44a9323e8526b421d755d5dfc357298923a7"} Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.366870 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" event={"ID":"c3cc88aa-0785-4d47-88d3-ea590bbf33f4","Type":"ContainerStarted","Data":"ab83c858db20765d6a6b0f2e72e8253c62b594cd4f757b0e3fb1b312ef56e162"} Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.450534 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8skw7"] Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.470015 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lgcp6"] Feb 19 10:26:57 crc kubenswrapper[4811]: W0219 10:26:57.474808 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bf7991a_1995_463e_8964_139036b7e6d5.slice/crio-78d9e527d6845d19eae0421987b36b9b1ad8869e6760e7597ccff0914cb3a289 WatchSource:0}: Error finding container 78d9e527d6845d19eae0421987b36b9b1ad8869e6760e7597ccff0914cb3a289: Status 404 returned error can't find the container with id 78d9e527d6845d19eae0421987b36b9b1ad8869e6760e7597ccff0914cb3a289 Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.483578 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rqgzq"] Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.505952 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-db-sync-928sb"] Feb 19 10:26:57 crc kubenswrapper[4811]: W0219 10:26:57.520793 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f3aa0de_ef5c_4148_97c5_3ef8ebcb1d7c.slice/crio-e4c4770c7c1a39a8fab0bd8a56313baaaca88ef1d05bb895b3f50fa2fa7d3b2c WatchSource:0}: Error finding container e4c4770c7c1a39a8fab0bd8a56313baaaca88ef1d05bb895b3f50fa2fa7d3b2c: Status 404 returned error can't find the container with id e4c4770c7c1a39a8fab0bd8a56313baaaca88ef1d05bb895b3f50fa2fa7d3b2c Feb 19 10:26:57 crc kubenswrapper[4811]: W0219 10:26:57.521345 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf091a50a_3df7_47c9_84a9_02c074138c7b.slice/crio-e82f858ed5cd4f38a8cf29808c7371efe8f1abd06ac01199c5157194b0171beb WatchSource:0}: Error finding container e82f858ed5cd4f38a8cf29808c7371efe8f1abd06ac01199c5157194b0171beb: Status 404 returned error can't find the container with id e82f858ed5cd4f38a8cf29808c7371efe8f1abd06ac01199c5157194b0171beb Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.796679 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xggvg"] Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.832491 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-866fcfcb4f-gbvbw"] Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.856695 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xzmx7"] Feb 19 10:26:57 crc kubenswrapper[4811]: I0219 10:26:57.870230 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:26:58 crc kubenswrapper[4811]: E0219 10:26:58.304983 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb40ac2f3_4c26_49e9_a194_7dfbcae15f46.slice/crio-conmon-2dc31e0f0b6dacbdd8b4c12d9cb52fde490da039fab74a2dc723f9c3521f39a7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb40ac2f3_4c26_49e9_a194_7dfbcae15f46.slice/crio-2dc31e0f0b6dacbdd8b4c12d9cb52fde490da039fab74a2dc723f9c3521f39a7.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.395047 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-928sb" event={"ID":"f091a50a-3df7-47c9-84a9-02c074138c7b","Type":"ContainerStarted","Data":"e82f858ed5cd4f38a8cf29808c7371efe8f1abd06ac01199c5157194b0171beb"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.398297 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45fe4f69-83fa-452c-a5fd-c993af1d0ef9","Type":"ContainerStarted","Data":"2576d108b93f973486f8ed7c3a5aa81fddecc857c03c5ab1f88410f62f9d3e7c"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.403456 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lgcp6" event={"ID":"9bf7991a-1995-463e-8964-139036b7e6d5","Type":"ContainerStarted","Data":"029c422170c2f579e58e656ba4a7e07a1023925b5dfdc84d59524d7be57d7ef9"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.403502 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lgcp6" event={"ID":"9bf7991a-1995-463e-8964-139036b7e6d5","Type":"ContainerStarted","Data":"78d9e527d6845d19eae0421987b36b9b1ad8869e6760e7597ccff0914cb3a289"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.409415 4811 generic.go:334] "Generic (PLEG): container finished" podID="b40ac2f3-4c26-49e9-a194-7dfbcae15f46" containerID="2dc31e0f0b6dacbdd8b4c12d9cb52fde490da039fab74a2dc723f9c3521f39a7" exitCode=0 Feb 19 10:26:58 crc 
kubenswrapper[4811]: I0219 10:26:58.409585 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" event={"ID":"b40ac2f3-4c26-49e9-a194-7dfbcae15f46","Type":"ContainerDied","Data":"2dc31e0f0b6dacbdd8b4c12d9cb52fde490da039fab74a2dc723f9c3521f39a7"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.409640 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" event={"ID":"b40ac2f3-4c26-49e9-a194-7dfbcae15f46","Type":"ContainerStarted","Data":"e29d2e8263c761ea49be2d3f4da53e21dfbcc948cdb4a5f5d9573adafe08e76d"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.415989 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xggvg" event={"ID":"9741f82c-1dce-4e95-9442-c424bcf8927d","Type":"ContainerStarted","Data":"d3222b0113e0f7bdc05a67c2fd673bfeccd711e27abe7ecbf017976e60dbeedc"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.424426 4811 generic.go:334] "Generic (PLEG): container finished" podID="c3cc88aa-0785-4d47-88d3-ea590bbf33f4" containerID="07eeedc3d88d0d019219fcdf70716c6c5752297a7ce1fa9db9ad95f0f4542221" exitCode=0 Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.424574 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" event={"ID":"c3cc88aa-0785-4d47-88d3-ea590bbf33f4","Type":"ContainerDied","Data":"07eeedc3d88d0d019219fcdf70716c6c5752297a7ce1fa9db9ad95f0f4542221"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.425527 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lgcp6" podStartSLOduration=3.425496561 podStartE2EDuration="3.425496561s" podCreationTimestamp="2026-02-19 10:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:26:58.424993949 +0000 UTC m=+1106.951458635" 
watchObservedRunningTime="2026-02-19 10:26:58.425496561 +0000 UTC m=+1106.951961247" Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.429680 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-866fcfcb4f-gbvbw" event={"ID":"dd777ce0-fb21-405b-8619-19318d2057d4","Type":"ContainerStarted","Data":"b8018ca1cbd644b6b1dd9a80d5fcd970edbf2e73fe85467ef8694b6e80522fdd"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.430804 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rqgzq" event={"ID":"d369ea05-6a36-44fd-bdd0-26eead9ba16d","Type":"ContainerStarted","Data":"4b34f24b5cac709a040725ce3c966f312348759f28121e3d635fc4d4b3a2190f"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.432513 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8skw7" event={"ID":"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c","Type":"ContainerStarted","Data":"6802d851187ba13a812916ff101da5888a68bf72b1ff5782bef294df6f713955"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.432538 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8skw7" event={"ID":"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c","Type":"ContainerStarted","Data":"e4c4770c7c1a39a8fab0bd8a56313baaaca88ef1d05bb895b3f50fa2fa7d3b2c"} Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.580831 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-8skw7" podStartSLOduration=3.5808030090000003 podStartE2EDuration="3.580803009s" podCreationTimestamp="2026-02-19 10:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:26:58.54796949 +0000 UTC m=+1107.074434176" watchObservedRunningTime="2026-02-19 10:26:58.580803009 +0000 UTC m=+1107.107267695" Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.848934 4811 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/horizon-758d556dd9-lgxtx"] Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.860482 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.870911 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-647dcdfd8f-skgg2"] Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.873216 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:58 crc kubenswrapper[4811]: I0219 10:26:58.881210 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-647dcdfd8f-skgg2"] Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.078431 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-config-data\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.079188 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-horizon-secret-key\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.079256 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-scripts\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.079299 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tp9d\" (UniqueName: \"kubernetes.io/projected/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-kube-api-access-6tp9d\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.079360 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-logs\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.084112 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.181069 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-horizon-secret-key\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.181148 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-scripts\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.181175 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tp9d\" (UniqueName: \"kubernetes.io/projected/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-kube-api-access-6tp9d\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " 
pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.181220 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-logs\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.181319 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-config-data\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.182369 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-logs\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.182810 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-scripts\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.182994 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-config-data\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.207154 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-horizon-secret-key\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.247295 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tp9d\" (UniqueName: \"kubernetes.io/projected/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-kube-api-access-6tp9d\") pod \"horizon-647dcdfd8f-skgg2\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.282818 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-config\") pod \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.282878 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8sb4\" (UniqueName: \"kubernetes.io/projected/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-kube-api-access-l8sb4\") pod \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.282969 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-swift-storage-0\") pod \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.283020 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-sb\") pod 
\"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.283041 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-svc\") pod \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.283062 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-nb\") pod \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\" (UID: \"c3cc88aa-0785-4d47-88d3-ea590bbf33f4\") " Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.323964 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-kube-api-access-l8sb4" (OuterVolumeSpecName: "kube-api-access-l8sb4") pod "c3cc88aa-0785-4d47-88d3-ea590bbf33f4" (UID: "c3cc88aa-0785-4d47-88d3-ea590bbf33f4"). InnerVolumeSpecName "kube-api-access-l8sb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.337563 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c3cc88aa-0785-4d47-88d3-ea590bbf33f4" (UID: "c3cc88aa-0785-4d47-88d3-ea590bbf33f4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.368563 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3cc88aa-0785-4d47-88d3-ea590bbf33f4" (UID: "c3cc88aa-0785-4d47-88d3-ea590bbf33f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.388774 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.388814 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.388825 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8sb4\" (UniqueName: \"kubernetes.io/projected/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-kube-api-access-l8sb4\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.409949 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-config" (OuterVolumeSpecName: "config") pod "c3cc88aa-0785-4d47-88d3-ea590bbf33f4" (UID: "c3cc88aa-0785-4d47-88d3-ea590bbf33f4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.417207 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3cc88aa-0785-4d47-88d3-ea590bbf33f4" (UID: "c3cc88aa-0785-4d47-88d3-ea590bbf33f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.433362 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3cc88aa-0785-4d47-88d3-ea590bbf33f4" (UID: "c3cc88aa-0785-4d47-88d3-ea590bbf33f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.453406 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" event={"ID":"b40ac2f3-4c26-49e9-a194-7dfbcae15f46","Type":"ContainerStarted","Data":"a80f07aa77378e84aa79046cc486097ea8b11e32e8e95bffb4f9416bf026d77e"} Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.454565 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.471502 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.472402 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-gzzkk" event={"ID":"c3cc88aa-0785-4d47-88d3-ea590bbf33f4","Type":"ContainerDied","Data":"ab83c858db20765d6a6b0f2e72e8253c62b594cd4f757b0e3fb1b312ef56e162"} Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.472519 4811 scope.go:117] "RemoveContainer" containerID="07eeedc3d88d0d019219fcdf70716c6c5752297a7ce1fa9db9ad95f0f4542221" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.493118 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" podStartSLOduration=3.493098462 podStartE2EDuration="3.493098462s" podCreationTimestamp="2026-02-19 10:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:26:59.493060451 +0000 UTC m=+1108.019525137" watchObservedRunningTime="2026-02-19 10:26:59.493098462 +0000 UTC m=+1108.019563148" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.494657 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.494686 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.494699 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3cc88aa-0785-4d47-88d3-ea590bbf33f4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.507383 4811 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.616136 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gzzkk"] Feb 19 10:26:59 crc kubenswrapper[4811]: I0219 10:26:59.634423 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-gzzkk"] Feb 19 10:27:00 crc kubenswrapper[4811]: I0219 10:27:00.171913 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-647dcdfd8f-skgg2"] Feb 19 10:27:00 crc kubenswrapper[4811]: I0219 10:27:00.491616 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-647dcdfd8f-skgg2" event={"ID":"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f","Type":"ContainerStarted","Data":"4dafc9b81d6782cf3f9b8df0039e4e8ad12e5ca3e9f5a4f7dd3f6cfb82ee4a21"} Feb 19 10:27:00 crc kubenswrapper[4811]: I0219 10:27:00.600846 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3cc88aa-0785-4d47-88d3-ea590bbf33f4" path="/var/lib/kubelet/pods/c3cc88aa-0785-4d47-88d3-ea590bbf33f4/volumes" Feb 19 10:27:01 crc kubenswrapper[4811]: I0219 10:27:01.503380 4811 generic.go:334] "Generic (PLEG): container finished" podID="a9133229-76e0-4848-b61e-f70c53216929" containerID="3ebe304eba92b1c15fdd3e06679060c5a4e36e380b5c3c66b49a6e27eb8abe44" exitCode=0 Feb 19 10:27:01 crc kubenswrapper[4811]: I0219 10:27:01.504242 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z4vh6" event={"ID":"a9133229-76e0-4848-b61e-f70c53216929","Type":"ContainerDied","Data":"3ebe304eba92b1c15fdd3e06679060c5a4e36e380b5c3c66b49a6e27eb8abe44"} Feb 19 10:27:03 crc kubenswrapper[4811]: I0219 10:27:03.584733 4811 generic.go:334] "Generic (PLEG): container finished" podID="9bf7991a-1995-463e-8964-139036b7e6d5" containerID="029c422170c2f579e58e656ba4a7e07a1023925b5dfdc84d59524d7be57d7ef9" exitCode=0 Feb 19 10:27:03 crc kubenswrapper[4811]: I0219 
10:27:03.584821 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lgcp6" event={"ID":"9bf7991a-1995-463e-8964-139036b7e6d5","Type":"ContainerDied","Data":"029c422170c2f579e58e656ba4a7e07a1023925b5dfdc84d59524d7be57d7ef9"} Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.013313 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-866fcfcb4f-gbvbw"] Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.076237 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-766dfcbd56-lqbrt"] Feb 19 10:27:05 crc kubenswrapper[4811]: E0219 10:27:05.076636 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3cc88aa-0785-4d47-88d3-ea590bbf33f4" containerName="init" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.076650 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3cc88aa-0785-4d47-88d3-ea590bbf33f4" containerName="init" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.076825 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3cc88aa-0785-4d47-88d3-ea590bbf33f4" containerName="init" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.077675 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.085207 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.098231 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-766dfcbd56-lqbrt"] Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.148358 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-647dcdfd8f-skgg2"] Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.151138 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-config-data\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.151608 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-logs\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.151851 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-secret-key\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.152154 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55wxc\" (UniqueName: 
\"kubernetes.io/projected/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-kube-api-access-55wxc\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.152316 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-combined-ca-bundle\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.152735 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-scripts\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.152853 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-tls-certs\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.226700 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-676dcb6496-tnhmt"] Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.236726 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.256476 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-combined-ca-bundle\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.256583 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-scripts\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.256618 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-tls-certs\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.256688 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-config-data\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.256737 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-logs\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 
10:27:05.256797 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-secret-key\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.256888 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55wxc\" (UniqueName: \"kubernetes.io/projected/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-kube-api-access-55wxc\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.264868 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-config-data\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.270724 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-676dcb6496-tnhmt"] Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.271126 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-logs\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.273745 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-scripts\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc 
kubenswrapper[4811]: I0219 10:27:05.351120 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-tls-certs\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.351287 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-combined-ca-bundle\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.353985 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-secret-key\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.360494 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2whr\" (UniqueName: \"kubernetes.io/projected/19d17501-8d5f-4f9d-8fad-94f48224e910-kube-api-access-b2whr\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.361019 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d17501-8d5f-4f9d-8fad-94f48224e910-logs\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.361115 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d17501-8d5f-4f9d-8fad-94f48224e910-horizon-tls-certs\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.361175 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19d17501-8d5f-4f9d-8fad-94f48224e910-horizon-secret-key\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.361248 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d17501-8d5f-4f9d-8fad-94f48224e910-combined-ca-bundle\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.361323 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19d17501-8d5f-4f9d-8fad-94f48224e910-config-data\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.364743 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19d17501-8d5f-4f9d-8fad-94f48224e910-scripts\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.376138 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-55wxc\" (UniqueName: \"kubernetes.io/projected/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-kube-api-access-55wxc\") pod \"horizon-766dfcbd56-lqbrt\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.407903 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.466697 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2whr\" (UniqueName: \"kubernetes.io/projected/19d17501-8d5f-4f9d-8fad-94f48224e910-kube-api-access-b2whr\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.466800 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d17501-8d5f-4f9d-8fad-94f48224e910-logs\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.466896 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d17501-8d5f-4f9d-8fad-94f48224e910-horizon-tls-certs\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.466974 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19d17501-8d5f-4f9d-8fad-94f48224e910-horizon-secret-key\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" 
Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.467057 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d17501-8d5f-4f9d-8fad-94f48224e910-combined-ca-bundle\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.467142 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19d17501-8d5f-4f9d-8fad-94f48224e910-config-data\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.467209 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19d17501-8d5f-4f9d-8fad-94f48224e910-scripts\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.468130 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19d17501-8d5f-4f9d-8fad-94f48224e910-scripts\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.468791 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19d17501-8d5f-4f9d-8fad-94f48224e910-logs\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.475371 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/19d17501-8d5f-4f9d-8fad-94f48224e910-config-data\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.480097 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/19d17501-8d5f-4f9d-8fad-94f48224e910-horizon-secret-key\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.480296 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d17501-8d5f-4f9d-8fad-94f48224e910-horizon-tls-certs\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.482216 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d17501-8d5f-4f9d-8fad-94f48224e910-combined-ca-bundle\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.502236 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2whr\" (UniqueName: \"kubernetes.io/projected/19d17501-8d5f-4f9d-8fad-94f48224e910-kube-api-access-b2whr\") pod \"horizon-676dcb6496-tnhmt\" (UID: \"19d17501-8d5f-4f9d-8fad-94f48224e910\") " pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:05 crc kubenswrapper[4811]: I0219 10:27:05.575039 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:07 crc kubenswrapper[4811]: I0219 10:27:07.088634 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:27:07 crc kubenswrapper[4811]: I0219 10:27:07.164176 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-lwl9w"] Feb 19 10:27:07 crc kubenswrapper[4811]: I0219 10:27:07.164495 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" containerName="dnsmasq-dns" containerID="cri-o://0c42924b955521195924491430f123cf78cd53674446d731e511f62fd5c6350f" gracePeriod=10 Feb 19 10:27:07 crc kubenswrapper[4811]: I0219 10:27:07.644329 4811 generic.go:334] "Generic (PLEG): container finished" podID="15311a54-a0df-4c3d-b58f-384839b49b34" containerID="0c42924b955521195924491430f123cf78cd53674446d731e511f62fd5c6350f" exitCode=0 Feb 19 10:27:07 crc kubenswrapper[4811]: I0219 10:27:07.644719 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" event={"ID":"15311a54-a0df-4c3d-b58f-384839b49b34","Type":"ContainerDied","Data":"0c42924b955521195924491430f123cf78cd53674446d731e511f62fd5c6350f"} Feb 19 10:27:08 crc kubenswrapper[4811]: I0219 10:27:08.995594 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: connect: connection refused" Feb 19 10:27:13 crc kubenswrapper[4811]: I0219 10:27:13.989761 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: connect: connection refused" Feb 19 
10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.177505 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z4vh6" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.195907 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.274591 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-config-data\") pod \"9bf7991a-1995-463e-8964-139036b7e6d5\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.274700 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-combined-ca-bundle\") pod \"9bf7991a-1995-463e-8964-139036b7e6d5\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.274753 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-combined-ca-bundle\") pod \"a9133229-76e0-4848-b61e-f70c53216929\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.274857 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-credential-keys\") pod \"9bf7991a-1995-463e-8964-139036b7e6d5\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.275632 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-scripts\") pod \"9bf7991a-1995-463e-8964-139036b7e6d5\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.275723 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-config-data\") pod \"a9133229-76e0-4848-b61e-f70c53216929\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.275774 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrdgq\" (UniqueName: \"kubernetes.io/projected/9bf7991a-1995-463e-8964-139036b7e6d5-kube-api-access-zrdgq\") pod \"9bf7991a-1995-463e-8964-139036b7e6d5\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.275867 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-db-sync-config-data\") pod \"a9133229-76e0-4848-b61e-f70c53216929\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.275907 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mmnn\" (UniqueName: \"kubernetes.io/projected/a9133229-76e0-4848-b61e-f70c53216929-kube-api-access-7mmnn\") pod \"a9133229-76e0-4848-b61e-f70c53216929\" (UID: \"a9133229-76e0-4848-b61e-f70c53216929\") " Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.275995 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-fernet-keys\") pod \"9bf7991a-1995-463e-8964-139036b7e6d5\" (UID: \"9bf7991a-1995-463e-8964-139036b7e6d5\") " Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 
10:27:14.281870 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-scripts" (OuterVolumeSpecName: "scripts") pod "9bf7991a-1995-463e-8964-139036b7e6d5" (UID: "9bf7991a-1995-463e-8964-139036b7e6d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.281943 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9bf7991a-1995-463e-8964-139036b7e6d5" (UID: "9bf7991a-1995-463e-8964-139036b7e6d5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.282212 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf7991a-1995-463e-8964-139036b7e6d5-kube-api-access-zrdgq" (OuterVolumeSpecName: "kube-api-access-zrdgq") pod "9bf7991a-1995-463e-8964-139036b7e6d5" (UID: "9bf7991a-1995-463e-8964-139036b7e6d5"). InnerVolumeSpecName "kube-api-access-zrdgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.283089 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a9133229-76e0-4848-b61e-f70c53216929" (UID: "a9133229-76e0-4848-b61e-f70c53216929"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.285793 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9bf7991a-1995-463e-8964-139036b7e6d5" (UID: "9bf7991a-1995-463e-8964-139036b7e6d5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.297887 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9133229-76e0-4848-b61e-f70c53216929-kube-api-access-7mmnn" (OuterVolumeSpecName: "kube-api-access-7mmnn") pod "a9133229-76e0-4848-b61e-f70c53216929" (UID: "a9133229-76e0-4848-b61e-f70c53216929"). InnerVolumeSpecName "kube-api-access-7mmnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.313708 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9133229-76e0-4848-b61e-f70c53216929" (UID: "a9133229-76e0-4848-b61e-f70c53216929"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.316551 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-config-data" (OuterVolumeSpecName: "config-data") pod "9bf7991a-1995-463e-8964-139036b7e6d5" (UID: "9bf7991a-1995-463e-8964-139036b7e6d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.326555 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bf7991a-1995-463e-8964-139036b7e6d5" (UID: "9bf7991a-1995-463e-8964-139036b7e6d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.356113 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-config-data" (OuterVolumeSpecName: "config-data") pod "a9133229-76e0-4848-b61e-f70c53216929" (UID: "a9133229-76e0-4848-b61e-f70c53216929"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.378639 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.378673 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.378685 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrdgq\" (UniqueName: \"kubernetes.io/projected/9bf7991a-1995-463e-8964-139036b7e6d5-kube-api-access-zrdgq\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.378706 4811 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 
19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.378718 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mmnn\" (UniqueName: \"kubernetes.io/projected/a9133229-76e0-4848-b61e-f70c53216929-kube-api-access-7mmnn\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.378726 4811 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.378736 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.378746 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.378754 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9133229-76e0-4848-b61e-f70c53216929-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.378763 4811 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bf7991a-1995-463e-8964-139036b7e6d5-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.722052 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lgcp6" event={"ID":"9bf7991a-1995-463e-8964-139036b7e6d5","Type":"ContainerDied","Data":"78d9e527d6845d19eae0421987b36b9b1ad8869e6760e7597ccff0914cb3a289"} Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.722090 4811 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78d9e527d6845d19eae0421987b36b9b1ad8869e6760e7597ccff0914cb3a289" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.722158 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lgcp6" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.726027 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-z4vh6" event={"ID":"a9133229-76e0-4848-b61e-f70c53216929","Type":"ContainerDied","Data":"9a4cf576dba88b3d5b3a2030533dae79ca5226d41ccb3a220006af6dfcde099e"} Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.726079 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4cf576dba88b3d5b3a2030533dae79ca5226d41ccb3a220006af6dfcde099e" Feb 19 10:27:14 crc kubenswrapper[4811]: I0219 10:27:14.726122 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-z4vh6" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.385859 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lgcp6"] Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.392491 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lgcp6"] Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.489075 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kdq88"] Feb 19 10:27:15 crc kubenswrapper[4811]: E0219 10:27:15.489586 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9133229-76e0-4848-b61e-f70c53216929" containerName="glance-db-sync" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.489611 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9133229-76e0-4848-b61e-f70c53216929" containerName="glance-db-sync" Feb 19 10:27:15 crc kubenswrapper[4811]: E0219 10:27:15.489626 4811 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9bf7991a-1995-463e-8964-139036b7e6d5" containerName="keystone-bootstrap" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.489635 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bf7991a-1995-463e-8964-139036b7e6d5" containerName="keystone-bootstrap" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.489844 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9133229-76e0-4848-b61e-f70c53216929" containerName="glance-db-sync" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.489868 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bf7991a-1995-463e-8964-139036b7e6d5" containerName="keystone-bootstrap" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.490559 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.492879 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.493325 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.493681 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-znphl" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.493875 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.494263 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.503982 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kdq88"] Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.614931 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-credential-keys\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.615093 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-config-data\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.615130 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-combined-ca-bundle\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.615169 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-fernet-keys\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.615300 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-scripts\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.615329 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6g5\" (UniqueName: \"kubernetes.io/projected/81ad234d-f37a-45e3-9b76-c45aabd896ae-kube-api-access-8c6g5\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.717337 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-scripts\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.717406 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6g5\" (UniqueName: \"kubernetes.io/projected/81ad234d-f37a-45e3-9b76-c45aabd896ae-kube-api-access-8c6g5\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.717626 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-credential-keys\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.717783 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-config-data\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.717818 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-combined-ca-bundle\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.717883 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-fernet-keys\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.735284 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-combined-ca-bundle\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.735717 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-credential-keys\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.738209 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-scripts\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.749216 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6g5\" (UniqueName: \"kubernetes.io/projected/81ad234d-f37a-45e3-9b76-c45aabd896ae-kube-api-access-8c6g5\") pod \"keystone-bootstrap-kdq88\" (UID: 
\"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.752578 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd4kq"] Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.754367 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-fernet-keys\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.754492 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.759034 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-config-data\") pod \"keystone-bootstrap-kdq88\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.792277 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd4kq"] Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.823317 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.924970 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.925044 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-config\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.925159 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.925196 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.925227 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: 
\"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:15 crc kubenswrapper[4811]: I0219 10:27:15.925255 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2jvx\" (UniqueName: \"kubernetes.io/projected/634e68c8-29a9-4f99-836b-ebc30d51b40d-kube-api-access-b2jvx\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.027640 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.027738 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-config\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.027899 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.027927 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: 
\"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.027967 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.028009 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2jvx\" (UniqueName: \"kubernetes.io/projected/634e68c8-29a9-4f99-836b-ebc30d51b40d-kube-api-access-b2jvx\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.029120 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.029721 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-config\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.029759 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.029827 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.029824 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.050242 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2jvx\" (UniqueName: \"kubernetes.io/projected/634e68c8-29a9-4f99-836b-ebc30d51b40d-kube-api-access-b2jvx\") pod \"dnsmasq-dns-785d8bcb8c-pd4kq\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.161509 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.581044 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.582822 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.585258 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.585444 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.585720 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gsjnn" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.599031 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bf7991a-1995-463e-8964-139036b7e6d5" path="/var/lib/kubelet/pods/9bf7991a-1995-463e-8964-139036b7e6d5/volumes" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.599677 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:27:16 crc kubenswrapper[4811]: E0219 10:27:16.690705 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 19 10:27:16 crc kubenswrapper[4811]: E0219 10:27:16.690894 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b6h555hd8h55dhdh55bh55fh96h666h66chbh65ch65fh588h6h5bfh587h598hdbh6fh668hb6h559hdbh688h686h97h647h684h597h5b8h6cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84d7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-758d556dd9-lgxtx_openstack(89f60543-0966-4b2a-b0d0-beb838040f9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:27:16 crc kubenswrapper[4811]: E0219 
10:27:16.692820 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-758d556dd9-lgxtx" podUID="89f60543-0966-4b2a-b0d0-beb838040f9e" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.741871 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.742173 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-logs\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.742755 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-scripts\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.742955 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.742981 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-config-data\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.742999 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94zd8\" (UniqueName: \"kubernetes.io/projected/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-kube-api-access-94zd8\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.743043 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.845310 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.845770 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-logs\") pod \"glance-default-external-api-0\" (UID: 
\"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.845817 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-scripts\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.845817 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.846084 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.846133 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94zd8\" (UniqueName: \"kubernetes.io/projected/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-kube-api-access-94zd8\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.846157 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.846211 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.846913 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.847732 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-logs\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.859511 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-config-data\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.860204 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc 
kubenswrapper[4811]: I0219 10:27:16.864141 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94zd8\" (UniqueName: \"kubernetes.io/projected/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-kube-api-access-94zd8\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.873827 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-scripts\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.892340 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:16 crc kubenswrapper[4811]: I0219 10:27:16.923273 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.032318 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.033774 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.036364 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.079998 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.154314 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-logs\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.154440 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g652j\" (UniqueName: \"kubernetes.io/projected/abf84b4a-531a-423e-8ca0-6813a3581f98-kube-api-access-g652j\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.154541 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.154641 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.154741 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-scripts\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.155310 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-config-data\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.155371 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.257073 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-logs\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.257141 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g652j\" (UniqueName: \"kubernetes.io/projected/abf84b4a-531a-423e-8ca0-6813a3581f98-kube-api-access-g652j\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.257171 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.257194 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.257230 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-scripts\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.257301 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-config-data\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.257323 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 
10:27:17.257623 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-logs\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.257632 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.257706 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.262057 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.262626 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-scripts\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.265793 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-config-data\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.274985 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g652j\" (UniqueName: \"kubernetes.io/projected/abf84b4a-531a-423e-8ca0-6813a3581f98-kube-api-access-g652j\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.293303 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:17 crc kubenswrapper[4811]: I0219 10:27:17.373346 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:18 crc kubenswrapper[4811]: I0219 10:27:18.462702 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:27:18 crc kubenswrapper[4811]: I0219 10:27:18.532406 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:27:21 crc kubenswrapper[4811]: I0219 10:27:21.801626 4811 generic.go:334] "Generic (PLEG): container finished" podID="1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c" containerID="6802d851187ba13a812916ff101da5888a68bf72b1ff5782bef294df6f713955" exitCode=0 Feb 19 10:27:21 crc kubenswrapper[4811]: I0219 10:27:21.801753 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8skw7" event={"ID":"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c","Type":"ContainerDied","Data":"6802d851187ba13a812916ff101da5888a68bf72b1ff5782bef294df6f713955"} Feb 19 10:27:23 crc kubenswrapper[4811]: I0219 10:27:23.989135 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: i/o timeout" Feb 19 10:27:23 crc kubenswrapper[4811]: I0219 10:27:23.989758 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" Feb 19 10:27:28 crc kubenswrapper[4811]: E0219 10:27:28.975437 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 19 10:27:28 crc kubenswrapper[4811]: E0219 10:27:28.976351 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n95h5cbh5b5h687hf9h78h567h67dh94h5bh8hbh96hb9h65ch65ch646h656hc6h5dch579hf6h5fh5f9h558h54dh589hfbh575h89h58bh666q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j97vh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(45fe4f69-83fa-452c-a5fd-c993af1d0ef9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:27:28 crc kubenswrapper[4811]: I0219 10:27:28.990556 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: i/o timeout" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.117874 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.123680 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.208373 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-config-data\") pod \"89f60543-0966-4b2a-b0d0-beb838040f9e\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.208859 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-swift-storage-0\") pod \"15311a54-a0df-4c3d-b58f-384839b49b34\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.208919 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-nb\") pod \"15311a54-a0df-4c3d-b58f-384839b49b34\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.208944 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-scripts\") pod \"89f60543-0966-4b2a-b0d0-beb838040f9e\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.209041 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-sb\") pod \"15311a54-a0df-4c3d-b58f-384839b49b34\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.209061 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-config\") pod \"15311a54-a0df-4c3d-b58f-384839b49b34\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.209134 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89f60543-0966-4b2a-b0d0-beb838040f9e-logs\") pod \"89f60543-0966-4b2a-b0d0-beb838040f9e\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.209199 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/89f60543-0966-4b2a-b0d0-beb838040f9e-horizon-secret-key\") pod \"89f60543-0966-4b2a-b0d0-beb838040f9e\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.209240 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnq8f\" (UniqueName: \"kubernetes.io/projected/15311a54-a0df-4c3d-b58f-384839b49b34-kube-api-access-gnq8f\") pod \"15311a54-a0df-4c3d-b58f-384839b49b34\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.209294 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84d7t\" (UniqueName: \"kubernetes.io/projected/89f60543-0966-4b2a-b0d0-beb838040f9e-kube-api-access-84d7t\") pod \"89f60543-0966-4b2a-b0d0-beb838040f9e\" (UID: \"89f60543-0966-4b2a-b0d0-beb838040f9e\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.209324 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-svc\") pod \"15311a54-a0df-4c3d-b58f-384839b49b34\" (UID: \"15311a54-a0df-4c3d-b58f-384839b49b34\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.223097 
4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89f60543-0966-4b2a-b0d0-beb838040f9e-logs" (OuterVolumeSpecName: "logs") pod "89f60543-0966-4b2a-b0d0-beb838040f9e" (UID: "89f60543-0966-4b2a-b0d0-beb838040f9e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.226513 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-config-data" (OuterVolumeSpecName: "config-data") pod "89f60543-0966-4b2a-b0d0-beb838040f9e" (UID: "89f60543-0966-4b2a-b0d0-beb838040f9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.227015 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-scripts" (OuterVolumeSpecName: "scripts") pod "89f60543-0966-4b2a-b0d0-beb838040f9e" (UID: "89f60543-0966-4b2a-b0d0-beb838040f9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.234294 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f60543-0966-4b2a-b0d0-beb838040f9e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "89f60543-0966-4b2a-b0d0-beb838040f9e" (UID: "89f60543-0966-4b2a-b0d0-beb838040f9e"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.235151 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15311a54-a0df-4c3d-b58f-384839b49b34-kube-api-access-gnq8f" (OuterVolumeSpecName: "kube-api-access-gnq8f") pod "15311a54-a0df-4c3d-b58f-384839b49b34" (UID: "15311a54-a0df-4c3d-b58f-384839b49b34"). InnerVolumeSpecName "kube-api-access-gnq8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.237050 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f60543-0966-4b2a-b0d0-beb838040f9e-kube-api-access-84d7t" (OuterVolumeSpecName: "kube-api-access-84d7t") pod "89f60543-0966-4b2a-b0d0-beb838040f9e" (UID: "89f60543-0966-4b2a-b0d0-beb838040f9e"). InnerVolumeSpecName "kube-api-access-84d7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.270744 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "15311a54-a0df-4c3d-b58f-384839b49b34" (UID: "15311a54-a0df-4c3d-b58f-384839b49b34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.275051 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15311a54-a0df-4c3d-b58f-384839b49b34" (UID: "15311a54-a0df-4c3d-b58f-384839b49b34"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.282525 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "15311a54-a0df-4c3d-b58f-384839b49b34" (UID: "15311a54-a0df-4c3d-b58f-384839b49b34"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.287044 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-config" (OuterVolumeSpecName: "config") pod "15311a54-a0df-4c3d-b58f-384839b49b34" (UID: "15311a54-a0df-4c3d-b58f-384839b49b34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.290029 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "15311a54-a0df-4c3d-b58f-384839b49b34" (UID: "15311a54-a0df-4c3d-b58f-384839b49b34"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.313946 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89f60543-0966-4b2a-b0d0-beb838040f9e-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.313980 4811 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/89f60543-0966-4b2a-b0d0-beb838040f9e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.313992 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnq8f\" (UniqueName: \"kubernetes.io/projected/15311a54-a0df-4c3d-b58f-384839b49b34-kube-api-access-gnq8f\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.314002 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84d7t\" (UniqueName: \"kubernetes.io/projected/89f60543-0966-4b2a-b0d0-beb838040f9e-kube-api-access-84d7t\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.314012 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.314021 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.314030 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.314038 
4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.314045 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89f60543-0966-4b2a-b0d0-beb838040f9e-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.314053 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.314060 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15311a54-a0df-4c3d-b58f-384839b49b34-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: E0219 10:27:29.656161 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 19 10:27:29 crc kubenswrapper[4811]: E0219 10:27:29.656479 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-272xl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-xggvg_openstack(9741f82c-1dce-4e95-9442-c424bcf8927d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:27:29 crc kubenswrapper[4811]: E0219 10:27:29.658163 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-xggvg" 
podUID="9741f82c-1dce-4e95-9442-c424bcf8927d" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.738422 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8skw7" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.822849 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-combined-ca-bundle\") pod \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.823161 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-config\") pod \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.823205 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg6cp\" (UniqueName: \"kubernetes.io/projected/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-kube-api-access-bg6cp\") pod \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\" (UID: \"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c\") " Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.833227 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-kube-api-access-bg6cp" (OuterVolumeSpecName: "kube-api-access-bg6cp") pod "1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c" (UID: "1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c"). InnerVolumeSpecName "kube-api-access-bg6cp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.850507 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c" (UID: "1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.854069 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-config" (OuterVolumeSpecName: "config") pod "1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c" (UID: "1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.903744 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.903749 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" event={"ID":"15311a54-a0df-4c3d-b58f-384839b49b34","Type":"ContainerDied","Data":"b89b052b6d51778cc78e423573acc6b17528770c64cda27332bd2fbdde968c3f"} Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.904425 4811 scope.go:117] "RemoveContainer" containerID="0c42924b955521195924491430f123cf78cd53674446d731e511f62fd5c6350f" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.911282 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8skw7" event={"ID":"1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c","Type":"ContainerDied","Data":"e4c4770c7c1a39a8fab0bd8a56313baaaca88ef1d05bb895b3f50fa2fa7d3b2c"} Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.911361 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4c4770c7c1a39a8fab0bd8a56313baaaca88ef1d05bb895b3f50fa2fa7d3b2c" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.911525 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8skw7" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.915151 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-758d556dd9-lgxtx" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.917371 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-758d556dd9-lgxtx" event={"ID":"89f60543-0966-4b2a-b0d0-beb838040f9e","Type":"ContainerDied","Data":"94a83c648a7993bed5db7a96fdbc44a9323e8526b421d755d5dfc357298923a7"} Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.929752 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.929831 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg6cp\" (UniqueName: \"kubernetes.io/projected/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-kube-api-access-bg6cp\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.929908 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:29 crc kubenswrapper[4811]: E0219 10:27:29.938966 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-xggvg" podUID="9741f82c-1dce-4e95-9442-c424bcf8927d" Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.984921 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-lwl9w"] Feb 19 10:27:29 crc kubenswrapper[4811]: I0219 10:27:29.994654 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-lwl9w"] Feb 19 10:27:30 crc kubenswrapper[4811]: I0219 10:27:30.008548 4811 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/horizon-758d556dd9-lgxtx"] Feb 19 10:27:30 crc kubenswrapper[4811]: I0219 10:27:30.015012 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-758d556dd9-lgxtx"] Feb 19 10:27:30 crc kubenswrapper[4811]: I0219 10:27:30.608839 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" path="/var/lib/kubelet/pods/15311a54-a0df-4c3d-b58f-384839b49b34/volumes" Feb 19 10:27:30 crc kubenswrapper[4811]: I0219 10:27:30.609584 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f60543-0966-4b2a-b0d0-beb838040f9e" path="/var/lib/kubelet/pods/89f60543-0966-4b2a-b0d0-beb838040f9e/volumes" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.006792 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd4kq"] Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.041974 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-f5hrk"] Feb 19 10:27:31 crc kubenswrapper[4811]: E0219 10:27:31.042340 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" containerName="init" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.042354 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" containerName="init" Feb 19 10:27:31 crc kubenswrapper[4811]: E0219 10:27:31.042376 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c" containerName="neutron-db-sync" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.042383 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c" containerName="neutron-db-sync" Feb 19 10:27:31 crc kubenswrapper[4811]: E0219 10:27:31.042394 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" 
containerName="dnsmasq-dns" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.042400 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" containerName="dnsmasq-dns" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.042574 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" containerName="dnsmasq-dns" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.042589 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c" containerName="neutron-db-sync" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.043428 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.094536 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-f5hrk"] Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.169269 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-config\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.169318 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-svc\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.169381 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjkhx\" (UniqueName: 
\"kubernetes.io/projected/4500564b-03cc-41f6-9c64-8601e8e279b4-kube-api-access-rjkhx\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.169404 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.169427 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.169610 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.271927 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-svc\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.272055 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjkhx\" 
(UniqueName: \"kubernetes.io/projected/4500564b-03cc-41f6-9c64-8601e8e279b4-kube-api-access-rjkhx\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.272134 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.272177 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.272205 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.272280 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-config\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.275372 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-config\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.277182 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-69cb98ff68-7rhp9"] Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.284238 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.284362 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-svc\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.285611 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.285773 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.285902 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.288728 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.289048 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.289164 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rkn7w" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.289277 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.291550 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69cb98ff68-7rhp9"] Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.321460 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjkhx\" (UniqueName: \"kubernetes.io/projected/4500564b-03cc-41f6-9c64-8601e8e279b4-kube-api-access-rjkhx\") pod \"dnsmasq-dns-55f844cf75-f5hrk\" (UID: 
\"4500564b-03cc-41f6-9c64-8601e8e279b4\") " pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.374138 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzg25\" (UniqueName: \"kubernetes.io/projected/fb83c4af-999b-46bc-89b8-f0909ccb3f14-kube-api-access-gzg25\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.374297 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-config\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.374403 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-ovndb-tls-certs\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.374607 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-combined-ca-bundle\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.374681 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-httpd-config\") pod \"neutron-69cb98ff68-7rhp9\" (UID: 
\"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.381513 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.476522 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-combined-ca-bundle\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.476595 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-httpd-config\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.476669 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzg25\" (UniqueName: \"kubernetes.io/projected/fb83c4af-999b-46bc-89b8-f0909ccb3f14-kube-api-access-gzg25\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.476776 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-config\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.476815 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-ovndb-tls-certs\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.481071 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-httpd-config\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.481432 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-ovndb-tls-certs\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.481440 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-combined-ca-bundle\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.484317 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-config\") pod \"neutron-69cb98ff68-7rhp9\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.499372 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzg25\" (UniqueName: \"kubernetes.io/projected/fb83c4af-999b-46bc-89b8-f0909ccb3f14-kube-api-access-gzg25\") pod \"neutron-69cb98ff68-7rhp9\" (UID: 
\"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:31 crc kubenswrapper[4811]: I0219 10:27:31.677790 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:32 crc kubenswrapper[4811]: E0219 10:27:32.324808 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 19 10:27:32 crc kubenswrapper[4811]: E0219 10:27:32.325410 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath
:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7495v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rqgzq_openstack(d369ea05-6a36-44fd-bdd0-26eead9ba16d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:27:32 crc kubenswrapper[4811]: E0219 10:27:32.326912 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-rqgzq" podUID="d369ea05-6a36-44fd-bdd0-26eead9ba16d" Feb 19 10:27:32 crc kubenswrapper[4811]: I0219 10:27:32.351981 4811 scope.go:117] "RemoveContainer" containerID="cda29eb6030af822ff4fce6213a5a7ba4f3fbd331e2feebf60e0c57ac95fc6bf" Feb 19 10:27:32 crc kubenswrapper[4811]: I0219 10:27:32.863204 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-676dcb6496-tnhmt"] Feb 19 10:27:32 crc kubenswrapper[4811]: W0219 10:27:32.920171 4811 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19d17501_8d5f_4f9d_8fad_94f48224e910.slice/crio-0de00a03ba29eb18eebb80901d2ab6c96fbf9d5c12c6dd80a2a23646d2b72b81 WatchSource:0}: Error finding container 0de00a03ba29eb18eebb80901d2ab6c96fbf9d5c12c6dd80a2a23646d2b72b81: Status 404 returned error can't find the container with id 0de00a03ba29eb18eebb80901d2ab6c96fbf9d5c12c6dd80a2a23646d2b72b81 Feb 19 10:27:32 crc kubenswrapper[4811]: I0219 10:27:32.960331 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676dcb6496-tnhmt" event={"ID":"19d17501-8d5f-4f9d-8fad-94f48224e910","Type":"ContainerStarted","Data":"0de00a03ba29eb18eebb80901d2ab6c96fbf9d5c12c6dd80a2a23646d2b72b81"} Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.011552 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd4kq"] Feb 19 10:27:33 crc kubenswrapper[4811]: E0219 10:27:33.050924 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-rqgzq" podUID="d369ea05-6a36-44fd-bdd0-26eead9ba16d" Feb 19 10:27:33 crc kubenswrapper[4811]: W0219 10:27:33.093233 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod634e68c8_29a9_4f99_836b_ebc30d51b40d.slice/crio-42b1ae242914d67bf2e0a9664e3ea80226498ee452eac08c8723802342e9e58c WatchSource:0}: Error finding container 42b1ae242914d67bf2e0a9664e3ea80226498ee452eac08c8723802342e9e58c: Status 404 returned error can't find the container with id 42b1ae242914d67bf2e0a9664e3ea80226498ee452eac08c8723802342e9e58c Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.204326 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-bootstrap-kdq88"] Feb 19 10:27:33 crc kubenswrapper[4811]: W0219 10:27:33.218677 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ad234d_f37a_45e3_9b76_c45aabd896ae.slice/crio-1e17e6fea553904e4a65f751dc0d964e361f6f8f9e2704803a39a128ee87276a WatchSource:0}: Error finding container 1e17e6fea553904e4a65f751dc0d964e361f6f8f9e2704803a39a128ee87276a: Status 404 returned error can't find the container with id 1e17e6fea553904e4a65f751dc0d964e361f6f8f9e2704803a39a128ee87276a Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.221275 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-766dfcbd56-lqbrt"] Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.262936 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 10:27:33 crc kubenswrapper[4811]: W0219 10:27:33.264063 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ab45e9b_6cfc_48a3_b603_f47d47bbd510.slice/crio-6130ebc3a1b7ccf94b7c9699df9cf15b0e03b26cbeecfcc5bd6d0b4b43ad391a WatchSource:0}: Error finding container 6130ebc3a1b7ccf94b7c9699df9cf15b0e03b26cbeecfcc5bd6d0b4b43ad391a: Status 404 returned error can't find the container with id 6130ebc3a1b7ccf94b7c9699df9cf15b0e03b26cbeecfcc5bd6d0b4b43ad391a Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.435937 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.651500 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.763233 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69cb98ff68-7rhp9"] Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.833873 4811 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56b7c77497-mp97q"] Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.839812 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.843336 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.843732 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.896659 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56b7c77497-mp97q"] Feb 19 10:27:33 crc kubenswrapper[4811]: W0219 10:27:33.933741 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4500564b_03cc_41f6_9c64_8601e8e279b4.slice/crio-b151de5c0e60588d0d56aca4b12a477d32dd2ff0aec4563a274262af967a3a4f WatchSource:0}: Error finding container b151de5c0e60588d0d56aca4b12a477d32dd2ff0aec4563a274262af967a3a4f: Status 404 returned error can't find the container with id b151de5c0e60588d0d56aca4b12a477d32dd2ff0aec4563a274262af967a3a4f Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.939400 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-ovndb-tls-certs\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.939567 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-internal-tls-certs\") pod 
\"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.939679 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-config\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.939798 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-httpd-config\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.942536 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-public-tls-certs\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.942672 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2w5g\" (UniqueName: \"kubernetes.io/projected/970dbfc5-9f21-4adc-93c3-9d9175826a48-kube-api-access-c2w5g\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.942773 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-combined-ca-bundle\") pod 
\"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.957146 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-f5hrk"] Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.994091 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588","Type":"ContainerStarted","Data":"547021b0d40211c2b33c1ba7ee266ae0a21bf9a2f2a29f13e31fb1f0f4f0c451"} Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.997013 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" event={"ID":"4500564b-03cc-41f6-9c64-8601e8e279b4","Type":"ContainerStarted","Data":"b151de5c0e60588d0d56aca4b12a477d32dd2ff0aec4563a274262af967a3a4f"} Feb 19 10:27:33 crc kubenswrapper[4811]: I0219 10:27:33.998287 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-lwl9w" podUID="15311a54-a0df-4c3d-b58f-384839b49b34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: i/o timeout" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.005824 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abf84b4a-531a-423e-8ca0-6813a3581f98","Type":"ContainerStarted","Data":"a7373cd91be8f33a46bde09c8047879a008c92b102f926ec17342bbe955f5e40"} Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.012056 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-928sb" event={"ID":"f091a50a-3df7-47c9-84a9-02c074138c7b","Type":"ContainerStarted","Data":"11d9e5d0cb4f616ae262209e47e6a4b07b90751f3880e3a7cc497879bc440c97"} Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.022203 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-766dfcbd56-lqbrt" event={"ID":"4ab45e9b-6cfc-48a3-b603-f47d47bbd510","Type":"ContainerStarted","Data":"6130ebc3a1b7ccf94b7c9699df9cf15b0e03b26cbeecfcc5bd6d0b4b43ad391a"} Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.036944 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdq88" event={"ID":"81ad234d-f37a-45e3-9b76-c45aabd896ae","Type":"ContainerStarted","Data":"1e17e6fea553904e4a65f751dc0d964e361f6f8f9e2704803a39a128ee87276a"} Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.037741 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-928sb" podStartSLOduration=7.593866503 podStartE2EDuration="39.03772155s" podCreationTimestamp="2026-02-19 10:26:55 +0000 UTC" firstStartedPulling="2026-02-19 10:26:57.542949257 +0000 UTC m=+1106.069413943" lastFinishedPulling="2026-02-19 10:27:28.986804294 +0000 UTC m=+1137.513268990" observedRunningTime="2026-02-19 10:27:34.036641792 +0000 UTC m=+1142.563106468" watchObservedRunningTime="2026-02-19 10:27:34.03772155 +0000 UTC m=+1142.564186236" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.042417 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" event={"ID":"634e68c8-29a9-4f99-836b-ebc30d51b40d","Type":"ContainerStarted","Data":"42b1ae242914d67bf2e0a9664e3ea80226498ee452eac08c8723802342e9e58c"} Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.044423 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-config\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.044521 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-httpd-config\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.044563 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-public-tls-certs\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.044591 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2w5g\" (UniqueName: \"kubernetes.io/projected/970dbfc5-9f21-4adc-93c3-9d9175826a48-kube-api-access-c2w5g\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.044614 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-combined-ca-bundle\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.044636 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-ovndb-tls-certs\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.044674 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-internal-tls-certs\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.051863 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-internal-tls-certs\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.052124 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cb98ff68-7rhp9" event={"ID":"fb83c4af-999b-46bc-89b8-f0909ccb3f14","Type":"ContainerStarted","Data":"71f292254258bfc16cf264c93d0014d0e3e157465a849f4efa69bb43cef05f40"} Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.054294 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-866fcfcb4f-gbvbw" event={"ID":"dd777ce0-fb21-405b-8619-19318d2057d4","Type":"ContainerStarted","Data":"27752c0075ae06c4e0dfe96f1e0a71dfccd33c957ff396423472768ce2e99343"} Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.054716 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-public-tls-certs\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.058396 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-config\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.063433 
4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-ovndb-tls-certs\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.068088 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-httpd-config\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.072877 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2w5g\" (UniqueName: \"kubernetes.io/projected/970dbfc5-9f21-4adc-93c3-9d9175826a48-kube-api-access-c2w5g\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.075639 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-combined-ca-bundle\") pod \"neutron-56b7c77497-mp97q\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:34 crc kubenswrapper[4811]: I0219 10:27:34.213768 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:34.959477 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56b7c77497-mp97q"] Feb 19 10:27:35 crc kubenswrapper[4811]: W0219 10:27:35.008682 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod970dbfc5_9f21_4adc_93c3_9d9175826a48.slice/crio-342dbddacb1f02f565b2406119f71b5fc221436913c0e196a103e83b7b6a11da WatchSource:0}: Error finding container 342dbddacb1f02f565b2406119f71b5fc221436913c0e196a103e83b7b6a11da: Status 404 returned error can't find the container with id 342dbddacb1f02f565b2406119f71b5fc221436913c0e196a103e83b7b6a11da Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.087756 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cb98ff68-7rhp9" event={"ID":"fb83c4af-999b-46bc-89b8-f0909ccb3f14","Type":"ContainerStarted","Data":"0502e3b3a8d1675000ba21adcaca40bffc785f9c9f085592a799ff25edc21f9f"} Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.112598 4811 generic.go:334] "Generic (PLEG): container finished" podID="4500564b-03cc-41f6-9c64-8601e8e279b4" containerID="c9c1705b94a03717e17c8fb55608cbaf1df37636b5424789940567829b899856" exitCode=0 Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.112687 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" event={"ID":"4500564b-03cc-41f6-9c64-8601e8e279b4","Type":"ContainerDied","Data":"c9c1705b94a03717e17c8fb55608cbaf1df37636b5424789940567829b899856"} Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.122704 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdq88" event={"ID":"81ad234d-f37a-45e3-9b76-c45aabd896ae","Type":"ContainerStarted","Data":"8cbdb804e3ecadb37f188b25f92c263fc49bebd42bd0e8b38cb234c97a2a2529"} Feb 19 10:27:35 crc kubenswrapper[4811]: 
I0219 10:27:35.138811 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56b7c77497-mp97q" event={"ID":"970dbfc5-9f21-4adc-93c3-9d9175826a48","Type":"ContainerStarted","Data":"342dbddacb1f02f565b2406119f71b5fc221436913c0e196a103e83b7b6a11da"} Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.159256 4811 generic.go:334] "Generic (PLEG): container finished" podID="634e68c8-29a9-4f99-836b-ebc30d51b40d" containerID="27af6fff8e3cacca8b80ec5fe542b5db99ea6e0502c2506ba82546e5d3463988" exitCode=0 Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.159318 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" event={"ID":"634e68c8-29a9-4f99-836b-ebc30d51b40d","Type":"ContainerDied","Data":"27af6fff8e3cacca8b80ec5fe542b5db99ea6e0502c2506ba82546e5d3463988"} Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.176528 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kdq88" podStartSLOduration=20.176501659 podStartE2EDuration="20.176501659s" podCreationTimestamp="2026-02-19 10:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:35.169732266 +0000 UTC m=+1143.696196962" watchObservedRunningTime="2026-02-19 10:27:35.176501659 +0000 UTC m=+1143.702966365" Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.225486 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676dcb6496-tnhmt" event={"ID":"19d17501-8d5f-4f9d-8fad-94f48224e910","Type":"ContainerStarted","Data":"3156af09a62658676d66d9a902df8c7b995da32cc4c4fb049d66a9d5276fa9a4"} Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.225519 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676dcb6496-tnhmt" 
event={"ID":"19d17501-8d5f-4f9d-8fad-94f48224e910","Type":"ContainerStarted","Data":"5a16e131f450bc97f82488f219a10d5724d67e75f6e3bf313d83f1ef56ec9b87"} Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.263872 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-647dcdfd8f-skgg2" event={"ID":"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f","Type":"ContainerStarted","Data":"bd44d6e798942ef5235730eadb2917eab2b0dd23d35aaca76621948a90248aee"} Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.263946 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-647dcdfd8f-skgg2" event={"ID":"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f","Type":"ContainerStarted","Data":"23d2b84ba24aea742228eab42ecf5fd2bdbe39819a604fa3819182dc1bb5d929"} Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.264174 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-647dcdfd8f-skgg2" podUID="a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" containerName="horizon-log" containerID="cri-o://23d2b84ba24aea742228eab42ecf5fd2bdbe39819a604fa3819182dc1bb5d929" gracePeriod=30 Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.265065 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-647dcdfd8f-skgg2" podUID="a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" containerName="horizon" containerID="cri-o://bd44d6e798942ef5235730eadb2917eab2b0dd23d35aaca76621948a90248aee" gracePeriod=30 Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.270697 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-676dcb6496-tnhmt" podStartSLOduration=30.270673555 podStartE2EDuration="30.270673555s" podCreationTimestamp="2026-02-19 10:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:35.255355823 +0000 UTC m=+1143.781820509" watchObservedRunningTime="2026-02-19 
10:27:35.270673555 +0000 UTC m=+1143.797138241" Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.299770 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-866fcfcb4f-gbvbw" event={"ID":"dd777ce0-fb21-405b-8619-19318d2057d4","Type":"ContainerStarted","Data":"7f3a208464629bad2499f8d8bd5c3337d9e47eca5699cb3aadf34f1aa0266468"} Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.299955 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-866fcfcb4f-gbvbw" podUID="dd777ce0-fb21-405b-8619-19318d2057d4" containerName="horizon-log" containerID="cri-o://27752c0075ae06c4e0dfe96f1e0a71dfccd33c957ff396423472768ce2e99343" gracePeriod=30 Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.300391 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-866fcfcb4f-gbvbw" podUID="dd777ce0-fb21-405b-8619-19318d2057d4" containerName="horizon" containerID="cri-o://7f3a208464629bad2499f8d8bd5c3337d9e47eca5699cb3aadf34f1aa0266468" gracePeriod=30 Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.312131 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-647dcdfd8f-skgg2" podStartSLOduration=5.143385878 podStartE2EDuration="37.312111353s" podCreationTimestamp="2026-02-19 10:26:58 +0000 UTC" firstStartedPulling="2026-02-19 10:27:00.193525775 +0000 UTC m=+1108.719990461" lastFinishedPulling="2026-02-19 10:27:32.36225125 +0000 UTC m=+1140.888715936" observedRunningTime="2026-02-19 10:27:35.299805159 +0000 UTC m=+1143.826269865" watchObservedRunningTime="2026-02-19 10:27:35.312111353 +0000 UTC m=+1143.838576039" Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.319823 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45fe4f69-83fa-452c-a5fd-c993af1d0ef9","Type":"ContainerStarted","Data":"18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5"} Feb 19 
10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.328049 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-766dfcbd56-lqbrt" event={"ID":"4ab45e9b-6cfc-48a3-b603-f47d47bbd510","Type":"ContainerStarted","Data":"73cc025a1567957d2afc71121863816e1f341f3be74bf2a4abb8ce0ce5ccf057"} Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.334005 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abf84b4a-531a-423e-8ca0-6813a3581f98","Type":"ContainerStarted","Data":"74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf"} Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.373048 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-866fcfcb4f-gbvbw" podStartSLOduration=7.542858383 podStartE2EDuration="39.373009959s" podCreationTimestamp="2026-02-19 10:26:56 +0000 UTC" firstStartedPulling="2026-02-19 10:26:57.820979899 +0000 UTC m=+1106.347444585" lastFinishedPulling="2026-02-19 10:27:29.651131475 +0000 UTC m=+1138.177596161" observedRunningTime="2026-02-19 10:27:35.330830861 +0000 UTC m=+1143.857295547" watchObservedRunningTime="2026-02-19 10:27:35.373009959 +0000 UTC m=+1143.899474645" Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.391203 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-766dfcbd56-lqbrt" podStartSLOduration=30.391183093 podStartE2EDuration="30.391183093s" podCreationTimestamp="2026-02-19 10:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:35.355092571 +0000 UTC m=+1143.881557247" watchObservedRunningTime="2026-02-19 10:27:35.391183093 +0000 UTC m=+1143.917647779" Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.409587 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 
10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.409803 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.577999 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:35 crc kubenswrapper[4811]: I0219 10:27:35.578036 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.183489 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.296184 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-swift-storage-0\") pod \"634e68c8-29a9-4f99-836b-ebc30d51b40d\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.296297 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2jvx\" (UniqueName: \"kubernetes.io/projected/634e68c8-29a9-4f99-836b-ebc30d51b40d-kube-api-access-b2jvx\") pod \"634e68c8-29a9-4f99-836b-ebc30d51b40d\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.296459 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-config\") pod \"634e68c8-29a9-4f99-836b-ebc30d51b40d\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.296488 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-svc\") pod \"634e68c8-29a9-4f99-836b-ebc30d51b40d\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.296560 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-nb\") pod \"634e68c8-29a9-4f99-836b-ebc30d51b40d\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.296638 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-sb\") pod \"634e68c8-29a9-4f99-836b-ebc30d51b40d\" (UID: \"634e68c8-29a9-4f99-836b-ebc30d51b40d\") " Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.311990 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/634e68c8-29a9-4f99-836b-ebc30d51b40d-kube-api-access-b2jvx" (OuterVolumeSpecName: "kube-api-access-b2jvx") pod "634e68c8-29a9-4f99-836b-ebc30d51b40d" (UID: "634e68c8-29a9-4f99-836b-ebc30d51b40d"). InnerVolumeSpecName "kube-api-access-b2jvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.325894 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "634e68c8-29a9-4f99-836b-ebc30d51b40d" (UID: "634e68c8-29a9-4f99-836b-ebc30d51b40d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.341893 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "634e68c8-29a9-4f99-836b-ebc30d51b40d" (UID: "634e68c8-29a9-4f99-836b-ebc30d51b40d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.360549 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" event={"ID":"634e68c8-29a9-4f99-836b-ebc30d51b40d","Type":"ContainerDied","Data":"42b1ae242914d67bf2e0a9664e3ea80226498ee452eac08c8723802342e9e58c"} Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.360610 4811 scope.go:117] "RemoveContainer" containerID="27af6fff8e3cacca8b80ec5fe542b5db99ea6e0502c2506ba82546e5d3463988" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.360687 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "634e68c8-29a9-4f99-836b-ebc30d51b40d" (UID: "634e68c8-29a9-4f99-836b-ebc30d51b40d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.360752 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pd4kq" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.375699 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588","Type":"ContainerStarted","Data":"a089b40ac68fe826ef59036e8d818b159f860bf480980361f6d048b3f9c6e3ff"} Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.385428 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" event={"ID":"4500564b-03cc-41f6-9c64-8601e8e279b4","Type":"ContainerStarted","Data":"bbec92bc055f0a5cf43754d2e5274ae1f5fabf734e051fa78ed7e176bc3ac2a3"} Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.386643 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.398637 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.398964 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.398981 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.398996 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2jvx\" (UniqueName: \"kubernetes.io/projected/634e68c8-29a9-4f99-836b-ebc30d51b40d-kube-api-access-b2jvx\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:36 crc 
kubenswrapper[4811]: I0219 10:27:36.404043 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-config" (OuterVolumeSpecName: "config") pod "634e68c8-29a9-4f99-836b-ebc30d51b40d" (UID: "634e68c8-29a9-4f99-836b-ebc30d51b40d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.407135 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abf84b4a-531a-423e-8ca0-6813a3581f98","Type":"ContainerStarted","Data":"366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a"} Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.407497 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="abf84b4a-531a-423e-8ca0-6813a3581f98" containerName="glance-log" containerID="cri-o://74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf" gracePeriod=30 Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.407693 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="abf84b4a-531a-423e-8ca0-6813a3581f98" containerName="glance-httpd" containerID="cri-o://366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a" gracePeriod=30 Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.411812 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "634e68c8-29a9-4f99-836b-ebc30d51b40d" (UID: "634e68c8-29a9-4f99-836b-ebc30d51b40d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.422989 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" podStartSLOduration=5.42297733 podStartE2EDuration="5.42297733s" podCreationTimestamp="2026-02-19 10:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:36.406956741 +0000 UTC m=+1144.933421427" watchObservedRunningTime="2026-02-19 10:27:36.42297733 +0000 UTC m=+1144.949442016" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.431586 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cb98ff68-7rhp9" event={"ID":"fb83c4af-999b-46bc-89b8-f0909ccb3f14","Type":"ContainerStarted","Data":"e4551f7eded09ebc9e05e93272a8f42cb6e80ec43d0de70c28e4b442b192d33e"} Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.432539 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.445482 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.445460485 podStartE2EDuration="21.445460485s" podCreationTimestamp="2026-02-19 10:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:36.441061372 +0000 UTC m=+1144.967526048" watchObservedRunningTime="2026-02-19 10:27:36.445460485 +0000 UTC m=+1144.971925171" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.445835 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-766dfcbd56-lqbrt" event={"ID":"4ab45e9b-6cfc-48a3-b603-f47d47bbd510","Type":"ContainerStarted","Data":"e88cbd6cffe02dfbcfdeed3f6b27917bd198051bdb2fc9298ecc4f011b2d45af"} Feb 19 
10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.468410 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56b7c77497-mp97q" event={"ID":"970dbfc5-9f21-4adc-93c3-9d9175826a48","Type":"ContainerStarted","Data":"2c49e7d8650b0afbd2ceea68b5f3ccd33145f22c54a4211f108d54b5a21d51e0"} Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.468495 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56b7c77497-mp97q" event={"ID":"970dbfc5-9f21-4adc-93c3-9d9175826a48","Type":"ContainerStarted","Data":"ed71657de0d32430a5f99fcead929e1d48acaf0662b544a9dc5e99bcdea7df3f"} Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.468519 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.510017 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.510061 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/634e68c8-29a9-4f99-836b-ebc30d51b40d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.532583 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56b7c77497-mp97q" podStartSLOduration=3.532434656 podStartE2EDuration="3.532434656s" podCreationTimestamp="2026-02-19 10:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:36.517952097 +0000 UTC m=+1145.044416783" watchObservedRunningTime="2026-02-19 10:27:36.532434656 +0000 UTC m=+1145.058899342" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.546331 4811 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/neutron-69cb98ff68-7rhp9" podStartSLOduration=5.54629018 podStartE2EDuration="5.54629018s" podCreationTimestamp="2026-02-19 10:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:36.495019361 +0000 UTC m=+1145.021484057" watchObservedRunningTime="2026-02-19 10:27:36.54629018 +0000 UTC m=+1145.072754856" Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.722536 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd4kq"] Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.726339 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd4kq"] Feb 19 10:27:36 crc kubenswrapper[4811]: I0219 10:27:36.938195 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.262138 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.328142 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"abf84b4a-531a-423e-8ca0-6813a3581f98\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.328238 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g652j\" (UniqueName: \"kubernetes.io/projected/abf84b4a-531a-423e-8ca0-6813a3581f98-kube-api-access-g652j\") pod \"abf84b4a-531a-423e-8ca0-6813a3581f98\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.328259 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-logs\") pod \"abf84b4a-531a-423e-8ca0-6813a3581f98\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.328279 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-config-data\") pod \"abf84b4a-531a-423e-8ca0-6813a3581f98\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.328378 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-combined-ca-bundle\") pod \"abf84b4a-531a-423e-8ca0-6813a3581f98\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.328410 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-httpd-run\") pod \"abf84b4a-531a-423e-8ca0-6813a3581f98\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.328575 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-scripts\") pod \"abf84b4a-531a-423e-8ca0-6813a3581f98\" (UID: \"abf84b4a-531a-423e-8ca0-6813a3581f98\") " Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.329075 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-logs" (OuterVolumeSpecName: "logs") pod "abf84b4a-531a-423e-8ca0-6813a3581f98" (UID: "abf84b4a-531a-423e-8ca0-6813a3581f98"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.336670 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-scripts" (OuterVolumeSpecName: "scripts") pod "abf84b4a-531a-423e-8ca0-6813a3581f98" (UID: "abf84b4a-531a-423e-8ca0-6813a3581f98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.336881 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "abf84b4a-531a-423e-8ca0-6813a3581f98" (UID: "abf84b4a-531a-423e-8ca0-6813a3581f98"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.337681 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "abf84b4a-531a-423e-8ca0-6813a3581f98" (UID: "abf84b4a-531a-423e-8ca0-6813a3581f98"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.342676 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf84b4a-531a-423e-8ca0-6813a3581f98-kube-api-access-g652j" (OuterVolumeSpecName: "kube-api-access-g652j") pod "abf84b4a-531a-423e-8ca0-6813a3581f98" (UID: "abf84b4a-531a-423e-8ca0-6813a3581f98"). InnerVolumeSpecName "kube-api-access-g652j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.367577 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abf84b4a-531a-423e-8ca0-6813a3581f98" (UID: "abf84b4a-531a-423e-8ca0-6813a3581f98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.391298 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-config-data" (OuterVolumeSpecName: "config-data") pod "abf84b4a-531a-423e-8ca0-6813a3581f98" (UID: "abf84b4a-531a-423e-8ca0-6813a3581f98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.431316 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.431382 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g652j\" (UniqueName: \"kubernetes.io/projected/abf84b4a-531a-423e-8ca0-6813a3581f98-kube-api-access-g652j\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.431395 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.431403 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.431412 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.431421 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/abf84b4a-531a-423e-8ca0-6813a3581f98-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.431429 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abf84b4a-531a-423e-8ca0-6813a3581f98-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.449481 4811 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.499409 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588","Type":"ContainerStarted","Data":"b2f8c6e3df6d7800ee5555278b522c3e749424daf1d5ca754f808b7eaf6e1dc0"} Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.499515 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" containerName="glance-log" containerID="cri-o://a089b40ac68fe826ef59036e8d818b159f860bf480980361f6d048b3f9c6e3ff" gracePeriod=30 Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.499573 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" containerName="glance-httpd" containerID="cri-o://b2f8c6e3df6d7800ee5555278b522c3e749424daf1d5ca754f808b7eaf6e1dc0" gracePeriod=30 Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.515365 4811 generic.go:334] "Generic (PLEG): container finished" podID="abf84b4a-531a-423e-8ca0-6813a3581f98" containerID="366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a" exitCode=143 Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.515431 4811 generic.go:334] "Generic (PLEG): container finished" podID="abf84b4a-531a-423e-8ca0-6813a3581f98" containerID="74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf" exitCode=143 Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.515464 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abf84b4a-531a-423e-8ca0-6813a3581f98","Type":"ContainerDied","Data":"366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a"} Feb 19 10:27:37 crc 
kubenswrapper[4811]: I0219 10:27:37.515506 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abf84b4a-531a-423e-8ca0-6813a3581f98","Type":"ContainerDied","Data":"74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf"} Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.515522 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"abf84b4a-531a-423e-8ca0-6813a3581f98","Type":"ContainerDied","Data":"a7373cd91be8f33a46bde09c8047879a008c92b102f926ec17342bbe955f5e40"} Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.515544 4811 scope.go:117] "RemoveContainer" containerID="366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.515681 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.521421 4811 generic.go:334] "Generic (PLEG): container finished" podID="f091a50a-3df7-47c9-84a9-02c074138c7b" containerID="11d9e5d0cb4f616ae262209e47e6a4b07b90751f3880e3a7cc497879bc440c97" exitCode=0 Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.523734 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-928sb" event={"ID":"f091a50a-3df7-47c9-84a9-02c074138c7b","Type":"ContainerDied","Data":"11d9e5d0cb4f616ae262209e47e6a4b07b90751f3880e3a7cc497879bc440c97"} Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.532657 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.558600 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=22.55858117 
podStartE2EDuration="22.55858117s" podCreationTimestamp="2026-02-19 10:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:37.530383969 +0000 UTC m=+1146.056848675" watchObservedRunningTime="2026-02-19 10:27:37.55858117 +0000 UTC m=+1146.085045846" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.578489 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.592539 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.595385 4811 scope.go:117] "RemoveContainer" containerID="74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.601174 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:27:37 crc kubenswrapper[4811]: E0219 10:27:37.601873 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf84b4a-531a-423e-8ca0-6813a3581f98" containerName="glance-httpd" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.601913 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf84b4a-531a-423e-8ca0-6813a3581f98" containerName="glance-httpd" Feb 19 10:27:37 crc kubenswrapper[4811]: E0219 10:27:37.601932 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf84b4a-531a-423e-8ca0-6813a3581f98" containerName="glance-log" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.601937 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf84b4a-531a-423e-8ca0-6813a3581f98" containerName="glance-log" Feb 19 10:27:37 crc kubenswrapper[4811]: E0219 10:27:37.601949 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634e68c8-29a9-4f99-836b-ebc30d51b40d" containerName="init" 
Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.601955 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="634e68c8-29a9-4f99-836b-ebc30d51b40d" containerName="init" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.602152 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf84b4a-531a-423e-8ca0-6813a3581f98" containerName="glance-httpd" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.602196 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="634e68c8-29a9-4f99-836b-ebc30d51b40d" containerName="init" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.602208 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf84b4a-531a-423e-8ca0-6813a3581f98" containerName="glance-log" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.603110 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.608209 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.610158 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.615027 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.683379 4811 scope.go:117] "RemoveContainer" containerID="366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a" Feb 19 10:27:37 crc kubenswrapper[4811]: E0219 10:27:37.684019 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a\": container with ID starting with 
366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a not found: ID does not exist" containerID="366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.684062 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a"} err="failed to get container status \"366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a\": rpc error: code = NotFound desc = could not find container \"366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a\": container with ID starting with 366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a not found: ID does not exist" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.684091 4811 scope.go:117] "RemoveContainer" containerID="74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf" Feb 19 10:27:37 crc kubenswrapper[4811]: E0219 10:27:37.684380 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf\": container with ID starting with 74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf not found: ID does not exist" containerID="74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.684405 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf"} err="failed to get container status \"74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf\": rpc error: code = NotFound desc = could not find container \"74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf\": container with ID starting with 74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf not found: ID does not 
exist" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.684423 4811 scope.go:117] "RemoveContainer" containerID="366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.685653 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a"} err="failed to get container status \"366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a\": rpc error: code = NotFound desc = could not find container \"366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a\": container with ID starting with 366812883bda9205007983f5df7023e01b10a7765acd19f53772c25269fd291a not found: ID does not exist" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.685704 4811 scope.go:117] "RemoveContainer" containerID="74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.686031 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf"} err="failed to get container status \"74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf\": rpc error: code = NotFound desc = could not find container \"74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf\": container with ID starting with 74cb02bf9eb707e68faddccc91d5b0d41d4b0afecc4261a2e60b1d7d43f79acf not found: ID does not exist" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.740904 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 
10:27:37.740968 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.741009 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.741058 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.741090 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6tqh\" (UniqueName: \"kubernetes.io/projected/8ddb9c94-9f78-447b-b65d-93cd67081226-kube-api-access-s6tqh\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.741149 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 
10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.741221 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.741246 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.843927 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.844006 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.844065 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.844115 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.844160 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6tqh\" (UniqueName: \"kubernetes.io/projected/8ddb9c94-9f78-447b-b65d-93cd67081226-kube-api-access-s6tqh\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.844252 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.844327 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.844358 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.845442 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.845763 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.845787 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.851421 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.853154 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.863132 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.864260 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.875257 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6tqh\" (UniqueName: \"kubernetes.io/projected/8ddb9c94-9f78-447b-b65d-93cd67081226-kube-api-access-s6tqh\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.905858 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:27:37 crc kubenswrapper[4811]: I0219 10:27:37.992357 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:38 crc kubenswrapper[4811]: I0219 10:27:38.539909 4811 generic.go:334] "Generic (PLEG): container finished" podID="ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" containerID="b2f8c6e3df6d7800ee5555278b522c3e749424daf1d5ca754f808b7eaf6e1dc0" exitCode=0 Feb 19 10:27:38 crc kubenswrapper[4811]: I0219 10:27:38.539937 4811 generic.go:334] "Generic (PLEG): container finished" podID="ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" containerID="a089b40ac68fe826ef59036e8d818b159f860bf480980361f6d048b3f9c6e3ff" exitCode=143 Feb 19 10:27:38 crc kubenswrapper[4811]: I0219 10:27:38.540118 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588","Type":"ContainerDied","Data":"b2f8c6e3df6d7800ee5555278b522c3e749424daf1d5ca754f808b7eaf6e1dc0"} Feb 19 10:27:38 crc kubenswrapper[4811]: I0219 10:27:38.540171 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588","Type":"ContainerDied","Data":"a089b40ac68fe826ef59036e8d818b159f860bf480980361f6d048b3f9c6e3ff"} Feb 19 10:27:38 crc kubenswrapper[4811]: I0219 10:27:38.595190 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="634e68c8-29a9-4f99-836b-ebc30d51b40d" path="/var/lib/kubelet/pods/634e68c8-29a9-4f99-836b-ebc30d51b40d/volumes" Feb 19 10:27:38 crc kubenswrapper[4811]: I0219 10:27:38.596053 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abf84b4a-531a-423e-8ca0-6813a3581f98" path="/var/lib/kubelet/pods/abf84b4a-531a-423e-8ca0-6813a3581f98/volumes" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.139700 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-928sb" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.161784 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.304548 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-logs\") pod \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.304636 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-scripts\") pod \"f091a50a-3df7-47c9-84a9-02c074138c7b\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.304679 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-config-data\") pod \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.304710 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-combined-ca-bundle\") pod \"f091a50a-3df7-47c9-84a9-02c074138c7b\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.304806 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-scripts\") pod \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " Feb 19 10:27:39 crc 
kubenswrapper[4811]: I0219 10:27:39.304840 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.304882 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcvs4\" (UniqueName: \"kubernetes.io/projected/f091a50a-3df7-47c9-84a9-02c074138c7b-kube-api-access-gcvs4\") pod \"f091a50a-3df7-47c9-84a9-02c074138c7b\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.304950 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f091a50a-3df7-47c9-84a9-02c074138c7b-logs\") pod \"f091a50a-3df7-47c9-84a9-02c074138c7b\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.304996 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94zd8\" (UniqueName: \"kubernetes.io/projected/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-kube-api-access-94zd8\") pod \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.305034 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-config-data\") pod \"f091a50a-3df7-47c9-84a9-02c074138c7b\" (UID: \"f091a50a-3df7-47c9-84a9-02c074138c7b\") " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.305071 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-httpd-run\") pod 
\"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.305095 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-combined-ca-bundle\") pod \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\" (UID: \"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588\") " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.306931 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f091a50a-3df7-47c9-84a9-02c074138c7b-logs" (OuterVolumeSpecName: "logs") pod "f091a50a-3df7-47c9-84a9-02c074138c7b" (UID: "f091a50a-3df7-47c9-84a9-02c074138c7b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.307114 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" (UID: "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.308045 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-logs" (OuterVolumeSpecName: "logs") pod "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" (UID: "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.312650 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" (UID: "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.318634 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-scripts" (OuterVolumeSpecName: "scripts") pod "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" (UID: "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.319031 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-kube-api-access-94zd8" (OuterVolumeSpecName: "kube-api-access-94zd8") pod "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" (UID: "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588"). InnerVolumeSpecName "kube-api-access-94zd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.319040 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f091a50a-3df7-47c9-84a9-02c074138c7b-kube-api-access-gcvs4" (OuterVolumeSpecName: "kube-api-access-gcvs4") pod "f091a50a-3df7-47c9-84a9-02c074138c7b" (UID: "f091a50a-3df7-47c9-84a9-02c074138c7b"). InnerVolumeSpecName "kube-api-access-gcvs4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.330607 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-scripts" (OuterVolumeSpecName: "scripts") pod "f091a50a-3df7-47c9-84a9-02c074138c7b" (UID: "f091a50a-3df7-47c9-84a9-02c074138c7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.357669 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.392213 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f091a50a-3df7-47c9-84a9-02c074138c7b" (UID: "f091a50a-3df7-47c9-84a9-02c074138c7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.397676 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-config-data" (OuterVolumeSpecName: "config-data") pod "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" (UID: "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.407994 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.408033 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.408048 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.408064 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.408092 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.408105 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcvs4\" (UniqueName: \"kubernetes.io/projected/f091a50a-3df7-47c9-84a9-02c074138c7b-kube-api-access-gcvs4\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.408118 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f091a50a-3df7-47c9-84a9-02c074138c7b-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.408129 4811 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-94zd8\" (UniqueName: \"kubernetes.io/projected/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-kube-api-access-94zd8\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.408142 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.408155 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.414653 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" (UID: "ba1dc49e-b10b-46dc-bfb9-1bb5320e8588"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.421720 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-config-data" (OuterVolumeSpecName: "config-data") pod "f091a50a-3df7-47c9-84a9-02c074138c7b" (UID: "f091a50a-3df7-47c9-84a9-02c074138c7b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.436091 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.508265 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.509649 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f091a50a-3df7-47c9-84a9-02c074138c7b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.509679 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.509693 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.564616 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-928sb" event={"ID":"f091a50a-3df7-47c9-84a9-02c074138c7b","Type":"ContainerDied","Data":"e82f858ed5cd4f38a8cf29808c7371efe8f1abd06ac01199c5157194b0171beb"} Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.565080 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e82f858ed5cd4f38a8cf29808c7371efe8f1abd06ac01199c5157194b0171beb" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.564913 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-928sb" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.582710 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba1dc49e-b10b-46dc-bfb9-1bb5320e8588","Type":"ContainerDied","Data":"547021b0d40211c2b33c1ba7ee266ae0a21bf9a2f2a29f13e31fb1f0f4f0c451"} Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.582755 4811 scope.go:117] "RemoveContainer" containerID="b2f8c6e3df6d7800ee5555278b522c3e749424daf1d5ca754f808b7eaf6e1dc0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.582799 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.640435 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.657259 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.666798 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:27:39 crc kubenswrapper[4811]: E0219 10:27:39.667295 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" containerName="glance-httpd" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.667316 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" containerName="glance-httpd" Feb 19 10:27:39 crc kubenswrapper[4811]: E0219 10:27:39.667342 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" containerName="glance-log" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.667351 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" 
containerName="glance-log" Feb 19 10:27:39 crc kubenswrapper[4811]: E0219 10:27:39.667363 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f091a50a-3df7-47c9-84a9-02c074138c7b" containerName="placement-db-sync" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.667370 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f091a50a-3df7-47c9-84a9-02c074138c7b" containerName="placement-db-sync" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.667602 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" containerName="glance-log" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.667653 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f091a50a-3df7-47c9-84a9-02c074138c7b" containerName="placement-db-sync" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.667669 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" containerName="glance-httpd" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.668737 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.671269 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.673815 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.680197 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.783734 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68f45d64cb-4stgq"] Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.785897 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.795066 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-tdb5v" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.795564 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.795731 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.795870 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.796028 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.825801 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.825909 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-logs\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.826032 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.826088 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-config-data\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.826117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.826168 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-scripts\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.826194 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.826259 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7pb\" (UniqueName: \"kubernetes.io/projected/d3322db9-30c9-4f53-af57-f5a45c7f9950-kube-api-access-pp7pb\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.858202 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68f45d64cb-4stgq"] Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.934626 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-scripts\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.934889 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc 
kubenswrapper[4811]: I0219 10:27:39.935109 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-scripts\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.935152 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp7pb\" (UniqueName: \"kubernetes.io/projected/d3322db9-30c9-4f53-af57-f5a45c7f9950-kube-api-access-pp7pb\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.935236 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.935272 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-public-tls-certs\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.935335 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6b8f\" (UniqueName: \"kubernetes.io/projected/1a92ef88-05db-43fe-a0bb-1d188afbae59-kube-api-access-p6b8f\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:39 crc 
kubenswrapper[4811]: I0219 10:27:39.935485 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-logs\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.935546 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-combined-ca-bundle\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.935747 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-config-data\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.935809 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.935848 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.935882 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-config-data\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.935916 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-internal-tls-certs\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.935960 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a92ef88-05db-43fe-a0bb-1d188afbae59-logs\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.936466 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.937176 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-logs\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.938684 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.946823 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.949934 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.950991 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-config-data\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.955636 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-scripts\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:39 crc kubenswrapper[4811]: I0219 10:27:39.968268 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp7pb\" (UniqueName: 
\"kubernetes.io/projected/d3322db9-30c9-4f53-af57-f5a45c7f9950-kube-api-access-pp7pb\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.003203 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " pod="openstack/glance-default-external-api-0" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.020974 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.038992 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-scripts\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.039079 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-public-tls-certs\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.039122 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6b8f\" (UniqueName: \"kubernetes.io/projected/1a92ef88-05db-43fe-a0bb-1d188afbae59-kube-api-access-p6b8f\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.039176 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-combined-ca-bundle\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.039257 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-config-data\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.039296 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-internal-tls-certs\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.039322 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a92ef88-05db-43fe-a0bb-1d188afbae59-logs\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.040885 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a92ef88-05db-43fe-a0bb-1d188afbae59-logs\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.055427 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-scripts\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.056048 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-internal-tls-certs\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.058427 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-config-data\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.058574 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-public-tls-certs\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.058711 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-combined-ca-bundle\") pod \"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.067227 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6b8f\" (UniqueName: \"kubernetes.io/projected/1a92ef88-05db-43fe-a0bb-1d188afbae59-kube-api-access-p6b8f\") pod 
\"placement-68f45d64cb-4stgq\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.141232 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.597530 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1dc49e-b10b-46dc-bfb9-1bb5320e8588" path="/var/lib/kubelet/pods/ba1dc49e-b10b-46dc-bfb9-1bb5320e8588/volumes" Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.602561 4811 generic.go:334] "Generic (PLEG): container finished" podID="81ad234d-f37a-45e3-9b76-c45aabd896ae" containerID="8cbdb804e3ecadb37f188b25f92c263fc49bebd42bd0e8b38cb234c97a2a2529" exitCode=0 Feb 19 10:27:40 crc kubenswrapper[4811]: I0219 10:27:40.602608 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdq88" event={"ID":"81ad234d-f37a-45e3-9b76-c45aabd896ae","Type":"ContainerDied","Data":"8cbdb804e3ecadb37f188b25f92c263fc49bebd42bd0e8b38cb234c97a2a2529"} Feb 19 10:27:41 crc kubenswrapper[4811]: I0219 10:27:41.383633 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:27:41 crc kubenswrapper[4811]: I0219 10:27:41.526347 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xzmx7"] Feb 19 10:27:41 crc kubenswrapper[4811]: I0219 10:27:41.526698 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" podUID="b40ac2f3-4c26-49e9-a194-7dfbcae15f46" containerName="dnsmasq-dns" containerID="cri-o://a80f07aa77378e84aa79046cc486097ea8b11e32e8e95bffb4f9416bf026d77e" gracePeriod=10 Feb 19 10:27:42 crc kubenswrapper[4811]: I0219 10:27:42.088032 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" 
podUID="b40ac2f3-4c26-49e9-a194-7dfbcae15f46" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Feb 19 10:27:42 crc kubenswrapper[4811]: I0219 10:27:42.636926 4811 generic.go:334] "Generic (PLEG): container finished" podID="b40ac2f3-4c26-49e9-a194-7dfbcae15f46" containerID="a80f07aa77378e84aa79046cc486097ea8b11e32e8e95bffb4f9416bf026d77e" exitCode=0 Feb 19 10:27:42 crc kubenswrapper[4811]: I0219 10:27:42.636989 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" event={"ID":"b40ac2f3-4c26-49e9-a194-7dfbcae15f46","Type":"ContainerDied","Data":"a80f07aa77378e84aa79046cc486097ea8b11e32e8e95bffb4f9416bf026d77e"} Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.411801 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-766dfcbd56-lqbrt" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Feb 19 10:27:45 crc kubenswrapper[4811]: W0219 10:27:45.439694 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ddb9c94_9f78_447b_b65d_93cd67081226.slice/crio-29924206aa7fae8aac994e8d609b5d90117af2726ef6bce6f8a366dab8474b4a WatchSource:0}: Error finding container 29924206aa7fae8aac994e8d609b5d90117af2726ef6bce6f8a366dab8474b4a: Status 404 returned error can't find the container with id 29924206aa7fae8aac994e8d609b5d90117af2726ef6bce6f8a366dab8474b4a Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.543782 4811 scope.go:117] "RemoveContainer" containerID="a089b40ac68fe826ef59036e8d818b159f860bf480980361f6d048b3f9c6e3ff" Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.584487 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-676dcb6496-tnhmt" 
podUID="19d17501-8d5f-4f9d-8fad-94f48224e910" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.682787 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ddb9c94-9f78-447b-b65d-93cd67081226","Type":"ContainerStarted","Data":"29924206aa7fae8aac994e8d609b5d90117af2726ef6bce6f8a366dab8474b4a"} Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.693730 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kdq88" event={"ID":"81ad234d-f37a-45e3-9b76-c45aabd896ae","Type":"ContainerDied","Data":"1e17e6fea553904e4a65f751dc0d964e361f6f8f9e2704803a39a128ee87276a"} Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.693774 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e17e6fea553904e4a65f751dc0d964e361f6f8f9e2704803a39a128ee87276a" Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.747041 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.892675 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-credential-keys\") pod \"81ad234d-f37a-45e3-9b76-c45aabd896ae\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.893379 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-config-data\") pod \"81ad234d-f37a-45e3-9b76-c45aabd896ae\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.893431 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-fernet-keys\") pod \"81ad234d-f37a-45e3-9b76-c45aabd896ae\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.893501 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-scripts\") pod \"81ad234d-f37a-45e3-9b76-c45aabd896ae\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.893536 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c6g5\" (UniqueName: \"kubernetes.io/projected/81ad234d-f37a-45e3-9b76-c45aabd896ae-kube-api-access-8c6g5\") pod \"81ad234d-f37a-45e3-9b76-c45aabd896ae\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.893562 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-combined-ca-bundle\") pod \"81ad234d-f37a-45e3-9b76-c45aabd896ae\" (UID: \"81ad234d-f37a-45e3-9b76-c45aabd896ae\") " Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.950640 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "81ad234d-f37a-45e3-9b76-c45aabd896ae" (UID: "81ad234d-f37a-45e3-9b76-c45aabd896ae"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.992441 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-scripts" (OuterVolumeSpecName: "scripts") pod "81ad234d-f37a-45e3-9b76-c45aabd896ae" (UID: "81ad234d-f37a-45e3-9b76-c45aabd896ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.992593 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ad234d-f37a-45e3-9b76-c45aabd896ae-kube-api-access-8c6g5" (OuterVolumeSpecName: "kube-api-access-8c6g5") pod "81ad234d-f37a-45e3-9b76-c45aabd896ae" (UID: "81ad234d-f37a-45e3-9b76-c45aabd896ae"). InnerVolumeSpecName "kube-api-access-8c6g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.992758 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "81ad234d-f37a-45e3-9b76-c45aabd896ae" (UID: "81ad234d-f37a-45e3-9b76-c45aabd896ae"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.995572 4811 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.995599 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.995610 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c6g5\" (UniqueName: \"kubernetes.io/projected/81ad234d-f37a-45e3-9b76-c45aabd896ae-kube-api-access-8c6g5\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:45 crc kubenswrapper[4811]: I0219 10:27:45.995623 4811 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.154090 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.158218 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81ad234d-f37a-45e3-9b76-c45aabd896ae" (UID: "81ad234d-f37a-45e3-9b76-c45aabd896ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.193688 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-config-data" (OuterVolumeSpecName: "config-data") pod "81ad234d-f37a-45e3-9b76-c45aabd896ae" (UID: "81ad234d-f37a-45e3-9b76-c45aabd896ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.214053 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.214084 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ad234d-f37a-45e3-9b76-c45aabd896ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.316002 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvjn7\" (UniqueName: \"kubernetes.io/projected/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-kube-api-access-wvjn7\") pod \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.316586 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-nb\") pod \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.316615 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-swift-storage-0\") pod \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.316740 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-sb\") pod \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.316777 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-config\") pod \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.316800 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-svc\") pod \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\" (UID: \"b40ac2f3-4c26-49e9-a194-7dfbcae15f46\") " Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.339509 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-kube-api-access-wvjn7" (OuterVolumeSpecName: "kube-api-access-wvjn7") pod "b40ac2f3-4c26-49e9-a194-7dfbcae15f46" (UID: "b40ac2f3-4c26-49e9-a194-7dfbcae15f46"). InnerVolumeSpecName "kube-api-access-wvjn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.340155 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68f45d64cb-4stgq"] Feb 19 10:27:46 crc kubenswrapper[4811]: W0219 10:27:46.393171 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a92ef88_05db_43fe_a0bb_1d188afbae59.slice/crio-05fec9dddb84901c1b5176dede193857309a8b0f1787b9579057119aeaee4aca WatchSource:0}: Error finding container 05fec9dddb84901c1b5176dede193857309a8b0f1787b9579057119aeaee4aca: Status 404 returned error can't find the container with id 05fec9dddb84901c1b5176dede193857309a8b0f1787b9579057119aeaee4aca Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.406204 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b40ac2f3-4c26-49e9-a194-7dfbcae15f46" (UID: "b40ac2f3-4c26-49e9-a194-7dfbcae15f46"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.408957 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b40ac2f3-4c26-49e9-a194-7dfbcae15f46" (UID: "b40ac2f3-4c26-49e9-a194-7dfbcae15f46"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.411204 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b40ac2f3-4c26-49e9-a194-7dfbcae15f46" (UID: "b40ac2f3-4c26-49e9-a194-7dfbcae15f46"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.419819 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.419859 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.419872 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvjn7\" (UniqueName: \"kubernetes.io/projected/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-kube-api-access-wvjn7\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.419887 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.460010 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-config" (OuterVolumeSpecName: "config") pod "b40ac2f3-4c26-49e9-a194-7dfbcae15f46" (UID: "b40ac2f3-4c26-49e9-a194-7dfbcae15f46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.474629 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b40ac2f3-4c26-49e9-a194-7dfbcae15f46" (UID: "b40ac2f3-4c26-49e9-a194-7dfbcae15f46"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.521146 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.521171 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ac2f3-4c26-49e9-a194-7dfbcae15f46-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.704290 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:27:46 crc kubenswrapper[4811]: W0219 10:27:46.722222 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3322db9_30c9_4f53_af57_f5a45c7f9950.slice/crio-a54451043890b2afafaabd04e380c506090ed088c2130f516b24254f0c0ee27a WatchSource:0}: Error finding container a54451043890b2afafaabd04e380c506090ed088c2130f516b24254f0c0ee27a: Status 404 returned error can't find the container with id a54451043890b2afafaabd04e380c506090ed088c2130f516b24254f0c0ee27a Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.742226 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45fe4f69-83fa-452c-a5fd-c993af1d0ef9","Type":"ContainerStarted","Data":"8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577"} Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.756364 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f45d64cb-4stgq" event={"ID":"1a92ef88-05db-43fe-a0bb-1d188afbae59","Type":"ContainerStarted","Data":"05fec9dddb84901c1b5176dede193857309a8b0f1787b9579057119aeaee4aca"} Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.783073 4811 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" event={"ID":"b40ac2f3-4c26-49e9-a194-7dfbcae15f46","Type":"ContainerDied","Data":"e29d2e8263c761ea49be2d3f4da53e21dfbcc948cdb4a5f5d9573adafe08e76d"} Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.783174 4811 scope.go:117] "RemoveContainer" containerID="a80f07aa77378e84aa79046cc486097ea8b11e32e8e95bffb4f9416bf026d77e" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.783441 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-xzmx7" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.836605 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kdq88" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.836825 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xggvg" event={"ID":"9741f82c-1dce-4e95-9442-c424bcf8927d","Type":"ContainerStarted","Data":"e41874502470ed1c0b31f888db5b94a42c3cf931b16f7ad49055790bbc466251"} Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.838110 4811 scope.go:117] "RemoveContainer" containerID="2dc31e0f0b6dacbdd8b4c12d9cb52fde490da039fab74a2dc723f9c3521f39a7" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.848271 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xzmx7"] Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.861581 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xzmx7"] Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.876904 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xggvg" podStartSLOduration=2.724579521 podStartE2EDuration="50.876885183s" podCreationTimestamp="2026-02-19 10:26:56 +0000 UTC" firstStartedPulling="2026-02-19 10:26:57.838753353 +0000 UTC m=+1106.365218029" lastFinishedPulling="2026-02-19 
10:27:45.991059005 +0000 UTC m=+1154.517523691" observedRunningTime="2026-02-19 10:27:46.873032585 +0000 UTC m=+1155.399497271" watchObservedRunningTime="2026-02-19 10:27:46.876885183 +0000 UTC m=+1155.403349869" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.936411 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-54bd9d4f8c-p9wzd"] Feb 19 10:27:46 crc kubenswrapper[4811]: E0219 10:27:46.937090 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40ac2f3-4c26-49e9-a194-7dfbcae15f46" containerName="init" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.937121 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40ac2f3-4c26-49e9-a194-7dfbcae15f46" containerName="init" Feb 19 10:27:46 crc kubenswrapper[4811]: E0219 10:27:46.937138 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ad234d-f37a-45e3-9b76-c45aabd896ae" containerName="keystone-bootstrap" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.937149 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ad234d-f37a-45e3-9b76-c45aabd896ae" containerName="keystone-bootstrap" Feb 19 10:27:46 crc kubenswrapper[4811]: E0219 10:27:46.937194 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40ac2f3-4c26-49e9-a194-7dfbcae15f46" containerName="dnsmasq-dns" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.937207 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40ac2f3-4c26-49e9-a194-7dfbcae15f46" containerName="dnsmasq-dns" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.937510 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40ac2f3-4c26-49e9-a194-7dfbcae15f46" containerName="dnsmasq-dns" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.937554 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ad234d-f37a-45e3-9b76-c45aabd896ae" containerName="keystone-bootstrap" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 
10:27:46.951892 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.955143 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.955832 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.956084 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.959629 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.959757 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-znphl" Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.971678 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54bd9d4f8c-p9wzd"] Feb 19 10:27:46 crc kubenswrapper[4811]: I0219 10:27:46.971786 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.035527 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-fernet-keys\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.035565 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-scripts\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: 
\"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.035593 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-credential-keys\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.035623 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95sb\" (UniqueName: \"kubernetes.io/projected/88141b63-6cd8-4294-a7dc-a2104eb13eb4-kube-api-access-j95sb\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.035705 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-public-tls-certs\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.035727 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-internal-tls-certs\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.035752 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-config-data\") pod 
\"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.035799 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-combined-ca-bundle\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.138202 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-public-tls-certs\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.138297 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-internal-tls-certs\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.138337 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-config-data\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.138411 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-combined-ca-bundle\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: 
\"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.138462 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-fernet-keys\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.138489 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-scripts\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.138521 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-credential-keys\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.138559 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95sb\" (UniqueName: \"kubernetes.io/projected/88141b63-6cd8-4294-a7dc-a2104eb13eb4-kube-api-access-j95sb\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.144994 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-scripts\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc 
kubenswrapper[4811]: I0219 10:27:47.145420 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-public-tls-certs\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.146087 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-fernet-keys\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.146170 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-internal-tls-certs\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.150180 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-combined-ca-bundle\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.151882 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-credential-keys\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.154623 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/88141b63-6cd8-4294-a7dc-a2104eb13eb4-config-data\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.166275 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95sb\" (UniqueName: \"kubernetes.io/projected/88141b63-6cd8-4294-a7dc-a2104eb13eb4-kube-api-access-j95sb\") pod \"keystone-54bd9d4f8c-p9wzd\" (UID: \"88141b63-6cd8-4294-a7dc-a2104eb13eb4\") " pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.291412 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.866481 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54bd9d4f8c-p9wzd"] Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.873846 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ddb9c94-9f78-447b-b65d-93cd67081226","Type":"ContainerStarted","Data":"da8e033020ef4c4bb22c4e2350708e3908597ed93b4c73737cdfd1bd0a73768f"} Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.884439 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f45d64cb-4stgq" event={"ID":"1a92ef88-05db-43fe-a0bb-1d188afbae59","Type":"ContainerStarted","Data":"d8cb62fd80072f2cbc83755c9a1d2433b446b094759e27ffd2aef4f9f647738a"} Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.884574 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f45d64cb-4stgq" event={"ID":"1a92ef88-05db-43fe-a0bb-1d188afbae59","Type":"ContainerStarted","Data":"e6fe3a3141d314270a76b0a597b39b2e42ebda56acc5631b73c9ce583e3d92d5"} Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.887713 4811 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.887763 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.917885 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rqgzq" event={"ID":"d369ea05-6a36-44fd-bdd0-26eead9ba16d","Type":"ContainerStarted","Data":"998e46efff20dedbb0c454556351271f7191e8f95d11e5a4806b9a68c93a20a5"} Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.923184 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3322db9-30c9-4f53-af57-f5a45c7f9950","Type":"ContainerStarted","Data":"a54451043890b2afafaabd04e380c506090ed088c2130f516b24254f0c0ee27a"} Feb 19 10:27:47 crc kubenswrapper[4811]: I0219 10:27:47.929170 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-68f45d64cb-4stgq" podStartSLOduration=8.929148214 podStartE2EDuration="8.929148214s" podCreationTimestamp="2026-02-19 10:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:47.915587987 +0000 UTC m=+1156.442052673" watchObservedRunningTime="2026-02-19 10:27:47.929148214 +0000 UTC m=+1156.455612900" Feb 19 10:27:48 crc kubenswrapper[4811]: I0219 10:27:48.594641 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40ac2f3-4c26-49e9-a194-7dfbcae15f46" path="/var/lib/kubelet/pods/b40ac2f3-4c26-49e9-a194-7dfbcae15f46/volumes" Feb 19 10:27:49 crc kubenswrapper[4811]: I0219 10:27:49.016105 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54bd9d4f8c-p9wzd" 
event={"ID":"88141b63-6cd8-4294-a7dc-a2104eb13eb4","Type":"ContainerStarted","Data":"f8538ecb2cbf9768c77e8e5f61b317c76210ea4dc43926502177c02d888ae6cd"} Feb 19 10:27:49 crc kubenswrapper[4811]: I0219 10:27:49.017396 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:27:49 crc kubenswrapper[4811]: I0219 10:27:49.017434 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54bd9d4f8c-p9wzd" event={"ID":"88141b63-6cd8-4294-a7dc-a2104eb13eb4","Type":"ContainerStarted","Data":"1d873e72c7ce709b960312222148d951e9a584344f2bdb3010de000ecb0d6be3"} Feb 19 10:27:49 crc kubenswrapper[4811]: I0219 10:27:49.024588 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3322db9-30c9-4f53-af57-f5a45c7f9950","Type":"ContainerStarted","Data":"fb88796dd5915e8d36949bf388c2c0147a3ae0ad76e34cf9eba67222a6140c0d"} Feb 19 10:27:49 crc kubenswrapper[4811]: I0219 10:27:49.027435 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ddb9c94-9f78-447b-b65d-93cd67081226","Type":"ContainerStarted","Data":"ec8e61ff34c251df2d0255f1a318fe8613e4a234d6bca515d97b64092128f36c"} Feb 19 10:27:49 crc kubenswrapper[4811]: I0219 10:27:49.042114 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-54bd9d4f8c-p9wzd" podStartSLOduration=3.042099783 podStartE2EDuration="3.042099783s" podCreationTimestamp="2026-02-19 10:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:49.03847193 +0000 UTC m=+1157.564936626" watchObservedRunningTime="2026-02-19 10:27:49.042099783 +0000 UTC m=+1157.568564469" Feb 19 10:27:49 crc kubenswrapper[4811]: I0219 10:27:49.074351 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-db-sync-rqgzq" podStartSLOduration=5.216946983 podStartE2EDuration="54.074326486s" podCreationTimestamp="2026-02-19 10:26:55 +0000 UTC" firstStartedPulling="2026-02-19 10:26:57.52074969 +0000 UTC m=+1106.047214366" lastFinishedPulling="2026-02-19 10:27:46.378129183 +0000 UTC m=+1154.904593869" observedRunningTime="2026-02-19 10:27:49.061622872 +0000 UTC m=+1157.588087558" watchObservedRunningTime="2026-02-19 10:27:49.074326486 +0000 UTC m=+1157.600791172" Feb 19 10:27:49 crc kubenswrapper[4811]: I0219 10:27:49.094714 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.094696437 podStartE2EDuration="12.094696437s" podCreationTimestamp="2026-02-19 10:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:49.090996332 +0000 UTC m=+1157.617461018" watchObservedRunningTime="2026-02-19 10:27:49.094696437 +0000 UTC m=+1157.621161123" Feb 19 10:27:50 crc kubenswrapper[4811]: I0219 10:27:50.039142 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3322db9-30c9-4f53-af57-f5a45c7f9950","Type":"ContainerStarted","Data":"5b1790bb12b9820123c9a97faa6e082b8522f146c8aea05c5ebdb448083c6d18"} Feb 19 10:27:50 crc kubenswrapper[4811]: I0219 10:27:50.066881 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.066863101 podStartE2EDuration="11.066863101s" podCreationTimestamp="2026-02-19 10:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:27:50.058139778 +0000 UTC m=+1158.584604464" watchObservedRunningTime="2026-02-19 10:27:50.066863101 +0000 UTC m=+1158.593327797" Feb 19 10:27:53 crc kubenswrapper[4811]: I0219 
10:27:53.072322 4811 generic.go:334] "Generic (PLEG): container finished" podID="9741f82c-1dce-4e95-9442-c424bcf8927d" containerID="e41874502470ed1c0b31f888db5b94a42c3cf931b16f7ad49055790bbc466251" exitCode=0 Feb 19 10:27:53 crc kubenswrapper[4811]: I0219 10:27:53.072378 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xggvg" event={"ID":"9741f82c-1dce-4e95-9442-c424bcf8927d","Type":"ContainerDied","Data":"e41874502470ed1c0b31f888db5b94a42c3cf931b16f7ad49055790bbc466251"} Feb 19 10:27:55 crc kubenswrapper[4811]: I0219 10:27:55.107496 4811 generic.go:334] "Generic (PLEG): container finished" podID="d369ea05-6a36-44fd-bdd0-26eead9ba16d" containerID="998e46efff20dedbb0c454556351271f7191e8f95d11e5a4806b9a68c93a20a5" exitCode=0 Feb 19 10:27:55 crc kubenswrapper[4811]: I0219 10:27:55.107738 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rqgzq" event={"ID":"d369ea05-6a36-44fd-bdd0-26eead9ba16d","Type":"ContainerDied","Data":"998e46efff20dedbb0c454556351271f7191e8f95d11e5a4806b9a68c93a20a5"} Feb 19 10:27:55 crc kubenswrapper[4811]: I0219 10:27:55.410167 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-766dfcbd56-lqbrt" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Feb 19 10:27:55 crc kubenswrapper[4811]: I0219 10:27:55.579424 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-676dcb6496-tnhmt" podUID="19d17501-8d5f-4f9d-8fad-94f48224e910" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.824903 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xggvg" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.838098 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.872188 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-db-sync-config-data\") pod \"9741f82c-1dce-4e95-9442-c424bcf8927d\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.874987 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-db-sync-config-data\") pod \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.875255 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7495v\" (UniqueName: \"kubernetes.io/projected/d369ea05-6a36-44fd-bdd0-26eead9ba16d-kube-api-access-7495v\") pod \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.875297 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d369ea05-6a36-44fd-bdd0-26eead9ba16d-etc-machine-id\") pod \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.875436 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d369ea05-6a36-44fd-bdd0-26eead9ba16d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d369ea05-6a36-44fd-bdd0-26eead9ba16d" (UID: 
"d369ea05-6a36-44fd-bdd0-26eead9ba16d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.875517 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-combined-ca-bundle\") pod \"9741f82c-1dce-4e95-9442-c424bcf8927d\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.875642 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-combined-ca-bundle\") pod \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.875700 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-config-data\") pod \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.875827 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-scripts\") pod \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\" (UID: \"d369ea05-6a36-44fd-bdd0-26eead9ba16d\") " Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.875902 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-272xl\" (UniqueName: \"kubernetes.io/projected/9741f82c-1dce-4e95-9442-c424bcf8927d-kube-api-access-272xl\") pod \"9741f82c-1dce-4e95-9442-c424bcf8927d\" (UID: \"9741f82c-1dce-4e95-9442-c424bcf8927d\") " Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.876998 4811 reconciler_common.go:293] 
"Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d369ea05-6a36-44fd-bdd0-26eead9ba16d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.885971 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9741f82c-1dce-4e95-9442-c424bcf8927d-kube-api-access-272xl" (OuterVolumeSpecName: "kube-api-access-272xl") pod "9741f82c-1dce-4e95-9442-c424bcf8927d" (UID: "9741f82c-1dce-4e95-9442-c424bcf8927d"). InnerVolumeSpecName "kube-api-access-272xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.887242 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9741f82c-1dce-4e95-9442-c424bcf8927d" (UID: "9741f82c-1dce-4e95-9442-c424bcf8927d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.900718 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d369ea05-6a36-44fd-bdd0-26eead9ba16d" (UID: "d369ea05-6a36-44fd-bdd0-26eead9ba16d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.910676 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d369ea05-6a36-44fd-bdd0-26eead9ba16d-kube-api-access-7495v" (OuterVolumeSpecName: "kube-api-access-7495v") pod "d369ea05-6a36-44fd-bdd0-26eead9ba16d" (UID: "d369ea05-6a36-44fd-bdd0-26eead9ba16d"). InnerVolumeSpecName "kube-api-access-7495v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.920388 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-scripts" (OuterVolumeSpecName: "scripts") pod "d369ea05-6a36-44fd-bdd0-26eead9ba16d" (UID: "d369ea05-6a36-44fd-bdd0-26eead9ba16d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.935310 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9741f82c-1dce-4e95-9442-c424bcf8927d" (UID: "9741f82c-1dce-4e95-9442-c424bcf8927d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.948955 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d369ea05-6a36-44fd-bdd0-26eead9ba16d" (UID: "d369ea05-6a36-44fd-bdd0-26eead9ba16d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.979435 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7495v\" (UniqueName: \"kubernetes.io/projected/d369ea05-6a36-44fd-bdd0-26eead9ba16d-kube-api-access-7495v\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.979484 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.979493 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.979507 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.979493 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-config-data" (OuterVolumeSpecName: "config-data") pod "d369ea05-6a36-44fd-bdd0-26eead9ba16d" (UID: "d369ea05-6a36-44fd-bdd0-26eead9ba16d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.979517 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-272xl\" (UniqueName: \"kubernetes.io/projected/9741f82c-1dce-4e95-9442-c424bcf8927d-kube-api-access-272xl\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.979601 4811 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9741f82c-1dce-4e95-9442-c424bcf8927d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:56 crc kubenswrapper[4811]: I0219 10:27:56.979616 4811 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.082233 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d369ea05-6a36-44fd-bdd0-26eead9ba16d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.130028 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xggvg" event={"ID":"9741f82c-1dce-4e95-9442-c424bcf8927d","Type":"ContainerDied","Data":"d3222b0113e0f7bdc05a67c2fd673bfeccd711e27abe7ecbf017976e60dbeedc"} Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.130083 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3222b0113e0f7bdc05a67c2fd673bfeccd711e27abe7ecbf017976e60dbeedc" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.130086 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xggvg" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.133335 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rqgzq" event={"ID":"d369ea05-6a36-44fd-bdd0-26eead9ba16d","Type":"ContainerDied","Data":"4b34f24b5cac709a040725ce3c966f312348759f28121e3d635fc4d4b3a2190f"} Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.133408 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b34f24b5cac709a040725ce3c966f312348759f28121e3d635fc4d4b3a2190f" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.133443 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rqgzq" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.472684 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:27:57 crc kubenswrapper[4811]: E0219 10:27:57.473397 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9741f82c-1dce-4e95-9442-c424bcf8927d" containerName="barbican-db-sync" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.473416 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9741f82c-1dce-4e95-9442-c424bcf8927d" containerName="barbican-db-sync" Feb 19 10:27:57 crc kubenswrapper[4811]: E0219 10:27:57.473461 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d369ea05-6a36-44fd-bdd0-26eead9ba16d" containerName="cinder-db-sync" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.473467 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d369ea05-6a36-44fd-bdd0-26eead9ba16d" containerName="cinder-db-sync" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.473643 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d369ea05-6a36-44fd-bdd0-26eead9ba16d" containerName="cinder-db-sync" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.473656 4811 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9741f82c-1dce-4e95-9442-c424bcf8927d" containerName="barbican-db-sync" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.474523 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.482019 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.482344 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.482486 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.482630 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rfbsz" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.494062 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.494209 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.494244 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.494272 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.494425 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e15f023-b95a-479c-a3b5-902a4b77ff7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.494485 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv6tx\" (UniqueName: \"kubernetes.io/projected/5e15f023-b95a-479c-a3b5-902a4b77ff7a-kube-api-access-gv6tx\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.504269 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.583549 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bwcm2"] Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.587777 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.596187 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.596250 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.596372 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-svc\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.596631 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.596684 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 
10:27:57.596904 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwlwp\" (UniqueName: \"kubernetes.io/projected/82bf6795-28db-4abd-923a-9706c1ca71d5-kube-api-access-vwlwp\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.596934 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e15f023-b95a-479c-a3b5-902a4b77ff7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.597016 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv6tx\" (UniqueName: \"kubernetes.io/projected/5e15f023-b95a-479c-a3b5-902a4b77ff7a-kube-api-access-gv6tx\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.597202 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-config\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.597279 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.597502 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.597538 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.599002 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e15f023-b95a-479c-a3b5-902a4b77ff7a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.602812 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.614128 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.614632 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-scripts\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.621836 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.635124 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bwcm2"] Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.642039 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv6tx\" (UniqueName: \"kubernetes.io/projected/5e15f023-b95a-479c-a3b5-902a4b77ff7a-kube-api-access-gv6tx\") pod \"cinder-scheduler-0\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.699065 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-config\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.699145 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.699174 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.699207 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.699226 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-svc\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.699283 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwlwp\" (UniqueName: \"kubernetes.io/projected/82bf6795-28db-4abd-923a-9706c1ca71d5-kube-api-access-vwlwp\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.701169 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-config\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.704612 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.705387 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.706108 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-svc\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.713267 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.726126 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwlwp\" (UniqueName: \"kubernetes.io/projected/82bf6795-28db-4abd-923a-9706c1ca71d5-kube-api-access-vwlwp\") pod \"dnsmasq-dns-b895b5785-bwcm2\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.806958 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.858839 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.888816 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.890046 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.895003 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.904730 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-scripts\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.904812 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/918b0763-de1c-49fa-8ade-2410fb2e1af3-logs\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.904853 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.904893 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/918b0763-de1c-49fa-8ade-2410fb2e1af3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.904928 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data-custom\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.904955 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.904978 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfs2c\" (UniqueName: \"kubernetes.io/projected/918b0763-de1c-49fa-8ade-2410fb2e1af3-kube-api-access-zfs2c\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.922500 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.995902 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:57 crc kubenswrapper[4811]: I0219 10:27:57.995986 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.006738 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-scripts\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.006839 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/918b0763-de1c-49fa-8ade-2410fb2e1af3-logs\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.006882 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.006931 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918b0763-de1c-49fa-8ade-2410fb2e1af3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.006971 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data-custom\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.007006 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.007039 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfs2c\" (UniqueName: \"kubernetes.io/projected/918b0763-de1c-49fa-8ade-2410fb2e1af3-kube-api-access-zfs2c\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.007252 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918b0763-de1c-49fa-8ade-2410fb2e1af3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.007607 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/918b0763-de1c-49fa-8ade-2410fb2e1af3-logs\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.026020 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-scripts\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 
10:27:58.029786 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data-custom\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.047921 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.054598 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.063766 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.064347 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfs2c\" (UniqueName: \"kubernetes.io/projected/918b0763-de1c-49fa-8ade-2410fb2e1af3-kube-api-access-zfs2c\") pod \"cinder-api-0\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.072808 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.144748 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.145312 4811 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.233517 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.395616 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7cc994bfc5-bf96r"] Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.406133 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.423230 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.423419 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.423543 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2jqkh" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.428005 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7cc994bfc5-bf96r"] Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.527323 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slxp6\" (UniqueName: \"kubernetes.io/projected/20451628-f7b0-43b3-b0b9-4ab828db8a77-kube-api-access-slxp6\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.527529 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/20451628-f7b0-43b3-b0b9-4ab828db8a77-combined-ca-bundle\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.527581 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20451628-f7b0-43b3-b0b9-4ab828db8a77-logs\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.527654 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20451628-f7b0-43b3-b0b9-4ab828db8a77-config-data\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.527681 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20451628-f7b0-43b3-b0b9-4ab828db8a77-config-data-custom\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.635132 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20451628-f7b0-43b3-b0b9-4ab828db8a77-logs\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.635247 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20451628-f7b0-43b3-b0b9-4ab828db8a77-config-data\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.635270 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20451628-f7b0-43b3-b0b9-4ab828db8a77-config-data-custom\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.635305 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slxp6\" (UniqueName: \"kubernetes.io/projected/20451628-f7b0-43b3-b0b9-4ab828db8a77-kube-api-access-slxp6\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.636205 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20451628-f7b0-43b3-b0b9-4ab828db8a77-logs\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.636428 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20451628-f7b0-43b3-b0b9-4ab828db8a77-combined-ca-bundle\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.655469 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20451628-f7b0-43b3-b0b9-4ab828db8a77-config-data\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.663645 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20451628-f7b0-43b3-b0b9-4ab828db8a77-config-data-custom\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.665483 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20451628-f7b0-43b3-b0b9-4ab828db8a77-combined-ca-bundle\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.666271 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-594c4b4cb6-89gd8"] Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.681039 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.713704 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.808109 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slxp6\" (UniqueName: \"kubernetes.io/projected/20451628-f7b0-43b3-b0b9-4ab828db8a77-kube-api-access-slxp6\") pod \"barbican-worker-7cc994bfc5-bf96r\" (UID: \"20451628-f7b0-43b3-b0b9-4ab828db8a77\") " pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.838565 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bwcm2"] Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.890580 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d8831e-3334-4e4a-84b1-98912b1dc6f2-config-data-custom\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.890696 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51d8831e-3334-4e4a-84b1-98912b1dc6f2-logs\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.890751 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d8831e-3334-4e4a-84b1-98912b1dc6f2-config-data\") pod 
\"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.890774 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmmwp\" (UniqueName: \"kubernetes.io/projected/51d8831e-3334-4e4a-84b1-98912b1dc6f2-kube-api-access-gmmwp\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.890828 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d8831e-3334-4e4a-84b1-98912b1dc6f2-combined-ca-bundle\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.890948 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-594c4b4cb6-89gd8"] Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.937531 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l669h"] Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.939695 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.966575 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l669h"] Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.978463 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75675fb4db-4qcj7"] Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.984496 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.989382 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.993130 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d8831e-3334-4e4a-84b1-98912b1dc6f2-config-data-custom\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.993280 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51d8831e-3334-4e4a-84b1-98912b1dc6f2-logs\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.993373 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d8831e-3334-4e4a-84b1-98912b1dc6f2-config-data\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:58 
crc kubenswrapper[4811]: I0219 10:27:58.993400 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmmwp\" (UniqueName: \"kubernetes.io/projected/51d8831e-3334-4e4a-84b1-98912b1dc6f2-kube-api-access-gmmwp\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.993530 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d8831e-3334-4e4a-84b1-98912b1dc6f2-combined-ca-bundle\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:58 crc kubenswrapper[4811]: I0219 10:27:58.999147 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75675fb4db-4qcj7"] Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.000257 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51d8831e-3334-4e4a-84b1-98912b1dc6f2-logs\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.005834 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d8831e-3334-4e4a-84b1-98912b1dc6f2-config-data-custom\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.022655 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/51d8831e-3334-4e4a-84b1-98912b1dc6f2-config-data\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.025025 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d8831e-3334-4e4a-84b1-98912b1dc6f2-combined-ca-bundle\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.051439 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmmwp\" (UniqueName: \"kubernetes.io/projected/51d8831e-3334-4e4a-84b1-98912b1dc6f2-kube-api-access-gmmwp\") pod \"barbican-keystone-listener-594c4b4cb6-89gd8\" (UID: \"51d8831e-3334-4e4a-84b1-98912b1dc6f2\") " pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.065964 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7cc994bfc5-bf96r" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.096379 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-combined-ca-bundle\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.096538 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.096575 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg295\" (UniqueName: \"kubernetes.io/projected/4cb1b23f-5500-410f-af9c-8e47ed4d0250-kube-api-access-bg295\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.096628 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.096652 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-config\") pod 
\"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.096682 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data-custom\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.096716 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.096752 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb1b23f-5500-410f-af9c-8e47ed4d0250-logs\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.096770 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.096791 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj4z5\" (UniqueName: 
\"kubernetes.io/projected/cda0926f-a0e7-488c-a0a4-6404d2720495-kube-api-access-rj4z5\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.096842 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.179751 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.205516 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-config\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.205574 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data-custom\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.205609 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 
19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.205639 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb1b23f-5500-410f-af9c-8e47ed4d0250-logs\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.205658 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.205678 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj4z5\" (UniqueName: \"kubernetes.io/projected/cda0926f-a0e7-488c-a0a4-6404d2720495-kube-api-access-rj4z5\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.205710 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.205733 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-combined-ca-bundle\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: 
I0219 10:27:59.205789 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.205811 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg295\" (UniqueName: \"kubernetes.io/projected/4cb1b23f-5500-410f-af9c-8e47ed4d0250-kube-api-access-bg295\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.205850 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.207101 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.207101 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-config\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.207965 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.208392 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb1b23f-5500-410f-af9c-8e47ed4d0250-logs\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.209385 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.209543 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.212176 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.227558 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data-custom\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.233004 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-combined-ca-bundle\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.242422 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj4z5\" (UniqueName: \"kubernetes.io/projected/cda0926f-a0e7-488c-a0a4-6404d2720495-kube-api-access-rj4z5\") pod \"dnsmasq-dns-5c9776ccc5-l669h\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.256551 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg295\" (UniqueName: \"kubernetes.io/projected/4cb1b23f-5500-410f-af9c-8e47ed4d0250-kube-api-access-bg295\") pod \"barbican-api-75675fb4db-4qcj7\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.274596 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:27:59 crc kubenswrapper[4811]: I0219 10:27:59.430746 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:28:00 crc kubenswrapper[4811]: I0219 10:28:00.022688 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:28:00 crc kubenswrapper[4811]: I0219 10:28:00.023061 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:28:00 crc kubenswrapper[4811]: I0219 10:28:00.106733 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:28:00 crc kubenswrapper[4811]: I0219 10:28:00.107343 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:28:00 crc kubenswrapper[4811]: I0219 10:28:00.172424 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:28:00 crc kubenswrapper[4811]: I0219 10:28:00.172495 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:28:00 crc kubenswrapper[4811]: I0219 10:28:00.648869 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:28:01 crc kubenswrapper[4811]: E0219 10:28:01.624933 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 19 10:28:01 crc kubenswrapper[4811]: E0219 10:28:01.625641 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j97vh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(45fe4f69-83fa-452c-a5fd-c993af1d0ef9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 10:28:01 crc kubenswrapper[4811]: E0219 10:28:01.626881 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" Feb 19 10:28:01 crc kubenswrapper[4811]: I0219 10:28:01.695098 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:28:01 crc kubenswrapper[4811]: I0219 10:28:01.695236 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:28:01 crc kubenswrapper[4811]: I0219 10:28:01.699610 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.185166 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56b7c77497-mp97q"] Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.194310 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56b7c77497-mp97q" podUID="970dbfc5-9f21-4adc-93c3-9d9175826a48" containerName="neutron-api" containerID="cri-o://ed71657de0d32430a5f99fcead929e1d48acaf0662b544a9dc5e99bcdea7df3f" gracePeriod=30 Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.196557 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56b7c77497-mp97q" podUID="970dbfc5-9f21-4adc-93c3-9d9175826a48" containerName="neutron-httpd" containerID="cri-o://2c49e7d8650b0afbd2ceea68b5f3ccd33145f22c54a4211f108d54b5a21d51e0" gracePeriod=30 Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.236475 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.237881 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.237699 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" containerName="sg-core" containerID="cri-o://8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577" gracePeriod=30 Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.236765 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" containerName="ceilometer-notification-agent" containerID="cri-o://18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5" gracePeriod=30 Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.324758 4811 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/neutron-599475cf85-sh7k6"] Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.326346 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.386825 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-599475cf85-sh7k6"] Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.442245 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-httpd-config\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.442326 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-combined-ca-bundle\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.442415 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-public-tls-certs\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.442569 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-internal-tls-certs\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " 
pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.442604 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-ovndb-tls-certs\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.442636 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-config\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.442712 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5td7t\" (UniqueName: \"kubernetes.io/projected/70197607-b202-4d15-a797-ef5d138c5502-kube-api-access-5td7t\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.545485 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-public-tls-certs\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.545559 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-internal-tls-certs\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " 
pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.545593 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-ovndb-tls-certs\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.545620 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-config\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.545690 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5td7t\" (UniqueName: \"kubernetes.io/projected/70197607-b202-4d15-a797-ef5d138c5502-kube-api-access-5td7t\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.545764 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-httpd-config\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.545797 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-combined-ca-bundle\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 
10:28:02.561727 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.618646 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5td7t\" (UniqueName: \"kubernetes.io/projected/70197607-b202-4d15-a797-ef5d138c5502-kube-api-access-5td7t\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.622332 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-public-tls-certs\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.628080 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-ovndb-tls-certs\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.630496 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-internal-tls-certs\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.639041 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-config\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 
19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.641006 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-combined-ca-bundle\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.648696 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/70197607-b202-4d15-a797-ef5d138c5502-httpd-config\") pod \"neutron-599475cf85-sh7k6\" (UID: \"70197607-b202-4d15-a797-ef5d138c5502\") " pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.699653 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:02 crc kubenswrapper[4811]: I0219 10:28:02.983488 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75675fb4db-4qcj7"] Feb 19 10:28:03 crc kubenswrapper[4811]: I0219 10:28:03.135567 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7cc994bfc5-bf96r"] Feb 19 10:28:03 crc kubenswrapper[4811]: I0219 10:28:03.198247 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:28:03 crc kubenswrapper[4811]: I0219 10:28:03.334667 4811 generic.go:334] "Generic (PLEG): container finished" podID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" containerID="8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577" exitCode=2 Feb 19 10:28:03 crc kubenswrapper[4811]: I0219 10:28:03.334789 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45fe4f69-83fa-452c-a5fd-c993af1d0ef9","Type":"ContainerDied","Data":"8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577"} Feb 19 10:28:03 crc kubenswrapper[4811]: 
I0219 10:28:03.342144 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e15f023-b95a-479c-a3b5-902a4b77ff7a","Type":"ContainerStarted","Data":"e9903100a01cb195d82ab4843d1849274d0fefcae037bd39ae474a5e79a54bdb"} Feb 19 10:28:03 crc kubenswrapper[4811]: I0219 10:28:03.348740 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75675fb4db-4qcj7" event={"ID":"4cb1b23f-5500-410f-af9c-8e47ed4d0250","Type":"ContainerStarted","Data":"a234e7ac45ea2aa0a3b8911c565989208efec6e720549f05e154350fb5801bd8"} Feb 19 10:28:03 crc kubenswrapper[4811]: I0219 10:28:03.356096 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7cc994bfc5-bf96r" event={"ID":"20451628-f7b0-43b3-b0b9-4ab828db8a77","Type":"ContainerStarted","Data":"dc866ed240ac04401e6dc499811812c53b079d4ab103eb343ffa12d80df47ae2"} Feb 19 10:28:03 crc kubenswrapper[4811]: I0219 10:28:03.776005 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-594c4b4cb6-89gd8"] Feb 19 10:28:03 crc kubenswrapper[4811]: I0219 10:28:03.865692 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:28:03 crc kubenswrapper[4811]: I0219 10:28:03.904618 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l669h"] Feb 19 10:28:03 crc kubenswrapper[4811]: I0219 10:28:03.927593 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bwcm2"] Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.214570 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-56b7c77497-mp97q" podUID="970dbfc5-9f21-4adc-93c3-9d9175826a48" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9696/\": dial tcp 10.217.0.152:9696: connect: connection refused" Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.253808 4811 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-599475cf85-sh7k6"] Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.412073 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"918b0763-de1c-49fa-8ade-2410fb2e1af3","Type":"ContainerStarted","Data":"70ac7e24b5f0a96c65b53f2e502e702d1d43b0c6f77cc33e7a9597359087ab9f"} Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.428856 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" event={"ID":"cda0926f-a0e7-488c-a0a4-6404d2720495","Type":"ContainerStarted","Data":"fa9ba624655c71844c854362824d1e85b02df6adf228a26cac2801dc95bb0e00"} Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.438815 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75675fb4db-4qcj7" event={"ID":"4cb1b23f-5500-410f-af9c-8e47ed4d0250","Type":"ContainerStarted","Data":"d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc"} Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.457967 4811 generic.go:334] "Generic (PLEG): container finished" podID="970dbfc5-9f21-4adc-93c3-9d9175826a48" containerID="2c49e7d8650b0afbd2ceea68b5f3ccd33145f22c54a4211f108d54b5a21d51e0" exitCode=0 Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.458024 4811 generic.go:334] "Generic (PLEG): container finished" podID="970dbfc5-9f21-4adc-93c3-9d9175826a48" containerID="ed71657de0d32430a5f99fcead929e1d48acaf0662b544a9dc5e99bcdea7df3f" exitCode=0 Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.458135 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56b7c77497-mp97q" event={"ID":"970dbfc5-9f21-4adc-93c3-9d9175826a48","Type":"ContainerDied","Data":"2c49e7d8650b0afbd2ceea68b5f3ccd33145f22c54a4211f108d54b5a21d51e0"} Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.458191 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56b7c77497-mp97q" 
event={"ID":"970dbfc5-9f21-4adc-93c3-9d9175826a48","Type":"ContainerDied","Data":"ed71657de0d32430a5f99fcead929e1d48acaf0662b544a9dc5e99bcdea7df3f"} Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.460337 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" event={"ID":"51d8831e-3334-4e4a-84b1-98912b1dc6f2","Type":"ContainerStarted","Data":"830aca54ba7ca656f7209a1aad12115a87209e04f4739c4ae3959bd5d65cd80a"} Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.476272 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599475cf85-sh7k6" event={"ID":"70197607-b202-4d15-a797-ef5d138c5502","Type":"ContainerStarted","Data":"3304b2670ef009e518c7c1412509b892fc6a7d2004b656c5ebd02e97d12f7113"} Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.494268 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-bwcm2" event={"ID":"82bf6795-28db-4abd-923a-9706c1ca71d5","Type":"ContainerStarted","Data":"5c977dc01fb2cab06d6ad8735ee551841d4d554e93d29dd494832cbd6e26aa15"} Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.823875 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:28:04 crc kubenswrapper[4811]: I0219 10:28:04.824773 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.105549 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6566bc4c4b-lsrcb"] Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.108124 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.112078 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.122308 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.151116 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6566bc4c4b-lsrcb"] Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.296775 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7km2f\" (UniqueName: \"kubernetes.io/projected/39389157-f52c-4caa-85a5-e2306e903180-kube-api-access-7km2f\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.297327 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-combined-ca-bundle\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.297562 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-config-data\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.297778 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-public-tls-certs\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.297836 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-config-data-custom\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.297905 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39389157-f52c-4caa-85a5-e2306e903180-logs\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.298012 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-internal-tls-certs\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.409239 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-766dfcbd56-lqbrt" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.409360 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.410501 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"e88cbd6cffe02dfbcfdeed3f6b27917bd198051bdb2fc9298ecc4f011b2d45af"} pod="openstack/horizon-766dfcbd56-lqbrt" containerMessage="Container horizon failed startup probe, will be restarted" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.410544 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-766dfcbd56-lqbrt" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" containerID="cri-o://e88cbd6cffe02dfbcfdeed3f6b27917bd198051bdb2fc9298ecc4f011b2d45af" gracePeriod=30 Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.411487 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-config-data-custom\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.416919 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39389157-f52c-4caa-85a5-e2306e903180-logs\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.417065 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-internal-tls-certs\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.417305 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7km2f\" (UniqueName: \"kubernetes.io/projected/39389157-f52c-4caa-85a5-e2306e903180-kube-api-access-7km2f\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.417398 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-combined-ca-bundle\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.417574 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-config-data\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.417742 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-public-tls-certs\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.418446 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39389157-f52c-4caa-85a5-e2306e903180-logs\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.421067 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.429907 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-public-tls-certs\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.433397 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-config-data-custom\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.434503 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-config-data\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.437834 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-internal-tls-certs\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.438537 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39389157-f52c-4caa-85a5-e2306e903180-combined-ca-bundle\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc 
kubenswrapper[4811]: I0219 10:28:05.447728 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7km2f\" (UniqueName: \"kubernetes.io/projected/39389157-f52c-4caa-85a5-e2306e903180-kube-api-access-7km2f\") pod \"barbican-api-6566bc4c4b-lsrcb\" (UID: \"39389157-f52c-4caa-85a5-e2306e903180\") " pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.499139 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.520661 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j97vh\" (UniqueName: \"kubernetes.io/projected/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-kube-api-access-j97vh\") pod \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.520752 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-sg-core-conf-yaml\") pod \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.520813 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-log-httpd\") pod \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.520904 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-config-data\") pod \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " Feb 19 10:28:05 crc 
kubenswrapper[4811]: I0219 10:28:05.520928 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-scripts\") pod \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.521008 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-combined-ca-bundle\") pod \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.521164 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-run-httpd\") pod \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\" (UID: \"45fe4f69-83fa-452c-a5fd-c993af1d0ef9\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.522128 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "45fe4f69-83fa-452c-a5fd-c993af1d0ef9" (UID: "45fe4f69-83fa-452c-a5fd-c993af1d0ef9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.522418 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.522812 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "45fe4f69-83fa-452c-a5fd-c993af1d0ef9" (UID: "45fe4f69-83fa-452c-a5fd-c993af1d0ef9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.535550 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-kube-api-access-j97vh" (OuterVolumeSpecName: "kube-api-access-j97vh") pod "45fe4f69-83fa-452c-a5fd-c993af1d0ef9" (UID: "45fe4f69-83fa-452c-a5fd-c993af1d0ef9"). InnerVolumeSpecName "kube-api-access-j97vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.543011 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-scripts" (OuterVolumeSpecName: "scripts") pod "45fe4f69-83fa-452c-a5fd-c993af1d0ef9" (UID: "45fe4f69-83fa-452c-a5fd-c993af1d0ef9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.577656 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-676dcb6496-tnhmt" podUID="19d17501-8d5f-4f9d-8fad-94f48224e910" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.578122 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.580714 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"3156af09a62658676d66d9a902df8c7b995da32cc4c4fb049d66a9d5276fa9a4"} pod="openstack/horizon-676dcb6496-tnhmt" containerMessage="Container horizon failed startup probe, will be restarted" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.580759 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-676dcb6496-tnhmt" podUID="19d17501-8d5f-4f9d-8fad-94f48224e910" containerName="horizon" containerID="cri-o://3156af09a62658676d66d9a902df8c7b995da32cc4c4fb049d66a9d5276fa9a4" gracePeriod=30 Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.581507 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" event={"ID":"cda0926f-a0e7-488c-a0a4-6404d2720495","Type":"ContainerStarted","Data":"cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f"} Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.598715 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.608640 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "45fe4f69-83fa-452c-a5fd-c993af1d0ef9" (UID: "45fe4f69-83fa-452c-a5fd-c993af1d0ef9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.624905 4811 generic.go:334] "Generic (PLEG): container finished" podID="a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" containerID="23d2b84ba24aea742228eab42ecf5fd2bdbe39819a604fa3819182dc1bb5d929" exitCode=137 Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.625039 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-647dcdfd8f-skgg2" event={"ID":"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f","Type":"ContainerDied","Data":"23d2b84ba24aea742228eab42ecf5fd2bdbe39819a604fa3819182dc1bb5d929"} Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.581914 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-config-data" (OuterVolumeSpecName: "config-data") pod "45fe4f69-83fa-452c-a5fd-c993af1d0ef9" (UID: "45fe4f69-83fa-452c-a5fd-c993af1d0ef9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.626763 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j97vh\" (UniqueName: \"kubernetes.io/projected/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-kube-api-access-j97vh\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.626786 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.626798 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.626808 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.643297 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45fe4f69-83fa-452c-a5fd-c993af1d0ef9" (UID: "45fe4f69-83fa-452c-a5fd-c993af1d0ef9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.658702 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.666281 4811 generic.go:334] "Generic (PLEG): container finished" podID="dd777ce0-fb21-405b-8619-19318d2057d4" containerID="7f3a208464629bad2499f8d8bd5c3337d9e47eca5699cb3aadf34f1aa0266468" exitCode=137 Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.666315 4811 generic.go:334] "Generic (PLEG): container finished" podID="dd777ce0-fb21-405b-8619-19318d2057d4" containerID="27752c0075ae06c4e0dfe96f1e0a71dfccd33c957ff396423472768ce2e99343" exitCode=137 Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.666358 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-866fcfcb4f-gbvbw" event={"ID":"dd777ce0-fb21-405b-8619-19318d2057d4","Type":"ContainerDied","Data":"7f3a208464629bad2499f8d8bd5c3337d9e47eca5699cb3aadf34f1aa0266468"} Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.666390 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-866fcfcb4f-gbvbw" event={"ID":"dd777ce0-fb21-405b-8619-19318d2057d4","Type":"ContainerDied","Data":"27752c0075ae06c4e0dfe96f1e0a71dfccd33c957ff396423472768ce2e99343"} Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.702767 4811 generic.go:334] "Generic (PLEG): container finished" podID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" containerID="18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5" exitCode=0 Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.702869 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45fe4f69-83fa-452c-a5fd-c993af1d0ef9","Type":"ContainerDied","Data":"18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5"} Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.702897 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45fe4f69-83fa-452c-a5fd-c993af1d0ef9","Type":"ContainerDied","Data":"2576d108b93f973486f8ed7c3a5aa81fddecc857c03c5ab1f88410f62f9d3e7c"} Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.702915 4811 scope.go:117] "RemoveContainer" containerID="8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.703060 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.720212 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75675fb4db-4qcj7" event={"ID":"4cb1b23f-5500-410f-af9c-8e47ed4d0250","Type":"ContainerStarted","Data":"669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094"} Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.763824 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.763892 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fe4f69-83fa-452c-a5fd-c993af1d0ef9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.796318 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.796371 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.809012 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56b7c77497-mp97q" 
event={"ID":"970dbfc5-9f21-4adc-93c3-9d9175826a48","Type":"ContainerDied","Data":"342dbddacb1f02f565b2406119f71b5fc221436913c0e196a103e83b7b6a11da"} Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.809066 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="342dbddacb1f02f565b2406119f71b5fc221436913c0e196a103e83b7b6a11da" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.883099 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599475cf85-sh7k6" event={"ID":"70197607-b202-4d15-a797-ef5d138c5502","Type":"ContainerStarted","Data":"263657e712378f766eea19a6a94bc202aaad7a1c5dfcda14f0300e8c1a3118bb"} Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.886702 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.945201 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75675fb4db-4qcj7" podStartSLOduration=7.94516294 podStartE2EDuration="7.94516294s" podCreationTimestamp="2026-02-19 10:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:05.878499137 +0000 UTC m=+1174.404963823" watchObservedRunningTime="2026-02-19 10:28:05.94516294 +0000 UTC m=+1174.471627626" Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.985320 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-combined-ca-bundle\") pod \"970dbfc5-9f21-4adc-93c3-9d9175826a48\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.985461 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-public-tls-certs\") pod \"970dbfc5-9f21-4adc-93c3-9d9175826a48\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.985552 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-httpd-config\") pod \"970dbfc5-9f21-4adc-93c3-9d9175826a48\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.985668 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-config\") pod \"970dbfc5-9f21-4adc-93c3-9d9175826a48\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.985774 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2w5g\" (UniqueName: \"kubernetes.io/projected/970dbfc5-9f21-4adc-93c3-9d9175826a48-kube-api-access-c2w5g\") pod \"970dbfc5-9f21-4adc-93c3-9d9175826a48\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.985829 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-internal-tls-certs\") pod \"970dbfc5-9f21-4adc-93c3-9d9175826a48\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " Feb 19 10:28:05 crc kubenswrapper[4811]: I0219 10:28:05.985939 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-ovndb-tls-certs\") pod \"970dbfc5-9f21-4adc-93c3-9d9175826a48\" (UID: \"970dbfc5-9f21-4adc-93c3-9d9175826a48\") " Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.106474 
4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "970dbfc5-9f21-4adc-93c3-9d9175826a48" (UID: "970dbfc5-9f21-4adc-93c3-9d9175826a48"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.107849 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970dbfc5-9f21-4adc-93c3-9d9175826a48-kube-api-access-c2w5g" (OuterVolumeSpecName: "kube-api-access-c2w5g") pod "970dbfc5-9f21-4adc-93c3-9d9175826a48" (UID: "970dbfc5-9f21-4adc-93c3-9d9175826a48"). InnerVolumeSpecName "kube-api-access-c2w5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.147003 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.147049 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2w5g\" (UniqueName: \"kubernetes.io/projected/970dbfc5-9f21-4adc-93c3-9d9175826a48-kube-api-access-c2w5g\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.147762 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.220730 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.321151 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:06 crc kubenswrapper[4811]: E0219 10:28:06.321885 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" 
containerName="ceilometer-notification-agent" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.321911 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" containerName="ceilometer-notification-agent" Feb 19 10:28:06 crc kubenswrapper[4811]: E0219 10:28:06.321935 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" containerName="sg-core" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.321946 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" containerName="sg-core" Feb 19 10:28:06 crc kubenswrapper[4811]: E0219 10:28:06.321976 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970dbfc5-9f21-4adc-93c3-9d9175826a48" containerName="neutron-api" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.321986 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="970dbfc5-9f21-4adc-93c3-9d9175826a48" containerName="neutron-api" Feb 19 10:28:06 crc kubenswrapper[4811]: E0219 10:28:06.322012 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970dbfc5-9f21-4adc-93c3-9d9175826a48" containerName="neutron-httpd" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.322020 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="970dbfc5-9f21-4adc-93c3-9d9175826a48" containerName="neutron-httpd" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.322256 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" containerName="sg-core" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.322278 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" containerName="ceilometer-notification-agent" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.322294 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="970dbfc5-9f21-4adc-93c3-9d9175826a48" 
containerName="neutron-httpd" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.322305 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="970dbfc5-9f21-4adc-93c3-9d9175826a48" containerName="neutron-api" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.326140 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.332061 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.332396 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.359669 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.359764 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-scripts\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.359823 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.359910 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-6bt5h\" (UniqueName: \"kubernetes.io/projected/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-kube-api-access-6bt5h\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.359945 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-log-httpd\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.360004 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-config-data\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.360038 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-run-httpd\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.379467 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.462262 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-scripts\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.462363 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.462439 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bt5h\" (UniqueName: \"kubernetes.io/projected/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-kube-api-access-6bt5h\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.462592 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-log-httpd\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.462635 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-config-data\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.462659 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-run-httpd\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.462718 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc 
kubenswrapper[4811]: I0219 10:28:06.468612 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-run-httpd\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.469161 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.469177 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-log-httpd\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.470792 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-scripts\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.478797 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-config-data\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.485315 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.503795 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bt5h\" (UniqueName: \"kubernetes.io/projected/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-kube-api-access-6bt5h\") pod \"ceilometer-0\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.533226 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "970dbfc5-9f21-4adc-93c3-9d9175826a48" (UID: "970dbfc5-9f21-4adc-93c3-9d9175826a48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.546637 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-config" (OuterVolumeSpecName: "config") pod "970dbfc5-9f21-4adc-93c3-9d9175826a48" (UID: "970dbfc5-9f21-4adc-93c3-9d9175826a48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.558614 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "970dbfc5-9f21-4adc-93c3-9d9175826a48" (UID: "970dbfc5-9f21-4adc-93c3-9d9175826a48"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.571092 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.571144 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.571163 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.617884 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "970dbfc5-9f21-4adc-93c3-9d9175826a48" (UID: "970dbfc5-9f21-4adc-93c3-9d9175826a48"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.626530 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "970dbfc5-9f21-4adc-93c3-9d9175826a48" (UID: "970dbfc5-9f21-4adc-93c3-9d9175826a48"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.651867 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45fe4f69-83fa-452c-a5fd-c993af1d0ef9" path="/var/lib/kubelet/pods/45fe4f69-83fa-452c-a5fd-c993af1d0ef9/volumes" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.674762 4811 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.674788 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/970dbfc5-9f21-4adc-93c3-9d9175826a48-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.740938 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.964555 4811 generic.go:334] "Generic (PLEG): container finished" podID="82bf6795-28db-4abd-923a-9706c1ca71d5" containerID="77c40bff07cdf39c46ac581229e43add0bfb4df702288c67514a5420de407099" exitCode=0 Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.964705 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-bwcm2" event={"ID":"82bf6795-28db-4abd-923a-9706c1ca71d5","Type":"ContainerDied","Data":"77c40bff07cdf39c46ac581229e43add0bfb4df702288c67514a5420de407099"} Feb 19 10:28:06 crc kubenswrapper[4811]: I0219 10:28:06.968154 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"918b0763-de1c-49fa-8ade-2410fb2e1af3","Type":"ContainerStarted","Data":"803dde998d1dac74a7881d56b130a1eec1ec2e2c13549cbffee678d901cb1dc8"} Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.006009 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="cda0926f-a0e7-488c-a0a4-6404d2720495" containerID="cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f" exitCode=0 Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.006573 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" event={"ID":"cda0926f-a0e7-488c-a0a4-6404d2720495","Type":"ContainerDied","Data":"cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f"} Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.015965 4811 generic.go:334] "Generic (PLEG): container finished" podID="a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" containerID="bd44d6e798942ef5235730eadb2917eab2b0dd23d35aaca76621948a90248aee" exitCode=137 Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.016103 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56b7c77497-mp97q" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.016578 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-647dcdfd8f-skgg2" event={"ID":"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f","Type":"ContainerDied","Data":"bd44d6e798942ef5235730eadb2917eab2b0dd23d35aaca76621948a90248aee"} Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.085913 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56b7c77497-mp97q"] Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.100846 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56b7c77497-mp97q"] Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.431960 4811 scope.go:117] "RemoveContainer" containerID="18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.692415 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.705067 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.710817 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-scripts\") pod \"dd777ce0-fb21-405b-8619-19318d2057d4\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.710959 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd777ce0-fb21-405b-8619-19318d2057d4-horizon-secret-key\") pod \"dd777ce0-fb21-405b-8619-19318d2057d4\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.710991 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp5bw\" (UniqueName: \"kubernetes.io/projected/dd777ce0-fb21-405b-8619-19318d2057d4-kube-api-access-pp5bw\") pod \"dd777ce0-fb21-405b-8619-19318d2057d4\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.711139 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd777ce0-fb21-405b-8619-19318d2057d4-logs\") pod \"dd777ce0-fb21-405b-8619-19318d2057d4\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.711328 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-config-data\") pod \"dd777ce0-fb21-405b-8619-19318d2057d4\" (UID: \"dd777ce0-fb21-405b-8619-19318d2057d4\") 
" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.716391 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.718776 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd777ce0-fb21-405b-8619-19318d2057d4-logs" (OuterVolumeSpecName: "logs") pod "dd777ce0-fb21-405b-8619-19318d2057d4" (UID: "dd777ce0-fb21-405b-8619-19318d2057d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.744528 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd777ce0-fb21-405b-8619-19318d2057d4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dd777ce0-fb21-405b-8619-19318d2057d4" (UID: "dd777ce0-fb21-405b-8619-19318d2057d4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.744874 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd777ce0-fb21-405b-8619-19318d2057d4-kube-api-access-pp5bw" (OuterVolumeSpecName: "kube-api-access-pp5bw") pod "dd777ce0-fb21-405b-8619-19318d2057d4" (UID: "dd777ce0-fb21-405b-8619-19318d2057d4"). InnerVolumeSpecName "kube-api-access-pp5bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.811698 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-config-data" (OuterVolumeSpecName: "config-data") pod "dd777ce0-fb21-405b-8619-19318d2057d4" (UID: "dd777ce0-fb21-405b-8619-19318d2057d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.816310 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-sb\") pod \"82bf6795-28db-4abd-923a-9706c1ca71d5\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.816412 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-horizon-secret-key\") pod \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.816677 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-config\") pod \"82bf6795-28db-4abd-923a-9706c1ca71d5\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.816758 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-swift-storage-0\") pod \"82bf6795-28db-4abd-923a-9706c1ca71d5\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.816934 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-logs\") pod \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.817091 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-config-data\") pod \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.817139 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-scripts\") pod \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.817165 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwlwp\" (UniqueName: \"kubernetes.io/projected/82bf6795-28db-4abd-923a-9706c1ca71d5-kube-api-access-vwlwp\") pod \"82bf6795-28db-4abd-923a-9706c1ca71d5\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.817330 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-svc\") pod \"82bf6795-28db-4abd-923a-9706c1ca71d5\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.817408 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-nb\") pod \"82bf6795-28db-4abd-923a-9706c1ca71d5\" (UID: \"82bf6795-28db-4abd-923a-9706c1ca71d5\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.817477 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tp9d\" (UniqueName: \"kubernetes.io/projected/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-kube-api-access-6tp9d\") pod \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\" (UID: \"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f\") " Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 
10:28:07.818897 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-logs" (OuterVolumeSpecName: "logs") pod "a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" (UID: "a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.820238 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-scripts" (OuterVolumeSpecName: "scripts") pod "dd777ce0-fb21-405b-8619-19318d2057d4" (UID: "dd777ce0-fb21-405b-8619-19318d2057d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.822292 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.822322 4811 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd777ce0-fb21-405b-8619-19318d2057d4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.822428 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp5bw\" (UniqueName: \"kubernetes.io/projected/dd777ce0-fb21-405b-8619-19318d2057d4-kube-api-access-pp5bw\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.822442 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd777ce0-fb21-405b-8619-19318d2057d4-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.822572 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.822586 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd777ce0-fb21-405b-8619-19318d2057d4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.827135 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82bf6795-28db-4abd-923a-9706c1ca71d5-kube-api-access-vwlwp" (OuterVolumeSpecName: "kube-api-access-vwlwp") pod "82bf6795-28db-4abd-923a-9706c1ca71d5" (UID: "82bf6795-28db-4abd-923a-9706c1ca71d5"). InnerVolumeSpecName "kube-api-access-vwlwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.843830 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-kube-api-access-6tp9d" (OuterVolumeSpecName: "kube-api-access-6tp9d") pod "a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" (UID: "a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f"). InnerVolumeSpecName "kube-api-access-6tp9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.844109 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" (UID: "a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.855528 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82bf6795-28db-4abd-923a-9706c1ca71d5" (UID: "82bf6795-28db-4abd-923a-9706c1ca71d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.886601 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82bf6795-28db-4abd-923a-9706c1ca71d5" (UID: "82bf6795-28db-4abd-923a-9706c1ca71d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.906708 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-config" (OuterVolumeSpecName: "config") pod "82bf6795-28db-4abd-923a-9706c1ca71d5" (UID: "82bf6795-28db-4abd-923a-9706c1ca71d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.915405 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-scripts" (OuterVolumeSpecName: "scripts") pod "a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" (UID: "a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.919869 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82bf6795-28db-4abd-923a-9706c1ca71d5" (UID: "82bf6795-28db-4abd-923a-9706c1ca71d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.922346 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82bf6795-28db-4abd-923a-9706c1ca71d5" (UID: "82bf6795-28db-4abd-923a-9706c1ca71d5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.924584 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.924621 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.924633 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tp9d\" (UniqueName: \"kubernetes.io/projected/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-kube-api-access-6tp9d\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.924646 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-ovsdbserver-sb\") on 
node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.924654 4811 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.924662 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.924671 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82bf6795-28db-4abd-923a-9706c1ca71d5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.924679 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.924687 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwlwp\" (UniqueName: \"kubernetes.io/projected/82bf6795-28db-4abd-923a-9706c1ca71d5-kube-api-access-vwlwp\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:07 crc kubenswrapper[4811]: I0219 10:28:07.947851 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-config-data" (OuterVolumeSpecName: "config-data") pod "a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" (UID: "a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.027342 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.075300 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-866fcfcb4f-gbvbw" event={"ID":"dd777ce0-fb21-405b-8619-19318d2057d4","Type":"ContainerDied","Data":"b8018ca1cbd644b6b1dd9a80d5fcd970edbf2e73fe85467ef8694b6e80522fdd"} Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.075467 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-866fcfcb4f-gbvbw" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.081836 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e15f023-b95a-479c-a3b5-902a4b77ff7a","Type":"ContainerStarted","Data":"2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b"} Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.088264 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-bwcm2" event={"ID":"82bf6795-28db-4abd-923a-9706c1ca71d5","Type":"ContainerDied","Data":"5c977dc01fb2cab06d6ad8735ee551841d4d554e93d29dd494832cbd6e26aa15"} Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.088387 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-bwcm2" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.099633 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-647dcdfd8f-skgg2" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.100093 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-647dcdfd8f-skgg2" event={"ID":"a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f","Type":"ContainerDied","Data":"4dafc9b81d6782cf3f9b8df0039e4e8ad12e5ca3e9f5a4f7dd3f6cfb82ee4a21"} Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.156829 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-866fcfcb4f-gbvbw"] Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.188171 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-866fcfcb4f-gbvbw"] Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.221832 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bwcm2"] Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.235585 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bwcm2"] Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.253030 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-647dcdfd8f-skgg2"] Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.264314 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-647dcdfd8f-skgg2"] Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.492083 4811 scope.go:117] "RemoveContainer" containerID="8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577" Feb 19 10:28:08 crc kubenswrapper[4811]: E0219 10:28:08.492691 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577\": container with ID starting with 8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577 not found: ID does not exist" containerID="8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577" Feb 19 10:28:08 
crc kubenswrapper[4811]: I0219 10:28:08.492777 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577"} err="failed to get container status \"8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577\": rpc error: code = NotFound desc = could not find container \"8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577\": container with ID starting with 8c353ca3b78d881375f4a0d811b6e09442da7c08f78b00c1db1d58f0b303f577 not found: ID does not exist" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.492831 4811 scope.go:117] "RemoveContainer" containerID="18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5" Feb 19 10:28:08 crc kubenswrapper[4811]: E0219 10:28:08.496958 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5\": container with ID starting with 18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5 not found: ID does not exist" containerID="18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.496999 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5"} err="failed to get container status \"18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5\": rpc error: code = NotFound desc = could not find container \"18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5\": container with ID starting with 18a9e2fd451f5340195bb7151c0bee0f554bcef0f59794396c92edc82a588cb5 not found: ID does not exist" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.497027 4811 scope.go:117] "RemoveContainer" containerID="7f3a208464629bad2499f8d8bd5c3337d9e47eca5699cb3aadf34f1aa0266468" Feb 19 
10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.645120 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82bf6795-28db-4abd-923a-9706c1ca71d5" path="/var/lib/kubelet/pods/82bf6795-28db-4abd-923a-9706c1ca71d5/volumes" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.646112 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970dbfc5-9f21-4adc-93c3-9d9175826a48" path="/var/lib/kubelet/pods/970dbfc5-9f21-4adc-93c3-9d9175826a48/volumes" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.659521 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" path="/var/lib/kubelet/pods/a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f/volumes" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.660258 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd777ce0-fb21-405b-8619-19318d2057d4" path="/var/lib/kubelet/pods/dd777ce0-fb21-405b-8619-19318d2057d4/volumes" Feb 19 10:28:08 crc kubenswrapper[4811]: I0219 10:28:08.778952 4811 scope.go:117] "RemoveContainer" containerID="27752c0075ae06c4e0dfe96f1e0a71dfccd33c957ff396423472768ce2e99343" Feb 19 10:28:09 crc kubenswrapper[4811]: I0219 10:28:09.100391 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6566bc4c4b-lsrcb"] Feb 19 10:28:09 crc kubenswrapper[4811]: I0219 10:28:09.125232 4811 scope.go:117] "RemoveContainer" containerID="77c40bff07cdf39c46ac581229e43add0bfb4df702288c67514a5420de407099" Feb 19 10:28:09 crc kubenswrapper[4811]: I0219 10:28:09.164834 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-599475cf85-sh7k6" event={"ID":"70197607-b202-4d15-a797-ef5d138c5502","Type":"ContainerStarted","Data":"8e30b5a1996c1f91039180561f24e4a0ed48745ac3c41817a4ea13314d59eefd"} Feb 19 10:28:09 crc kubenswrapper[4811]: I0219 10:28:09.165157 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-599475cf85-sh7k6" 
Feb 19 10:28:09 crc kubenswrapper[4811]: I0219 10:28:09.196840 4811 scope.go:117] "RemoveContainer" containerID="bd44d6e798942ef5235730eadb2917eab2b0dd23d35aaca76621948a90248aee" Feb 19 10:28:09 crc kubenswrapper[4811]: I0219 10:28:09.249824 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-599475cf85-sh7k6" podStartSLOduration=7.249798695 podStartE2EDuration="7.249798695s" podCreationTimestamp="2026-02-19 10:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:09.194890953 +0000 UTC m=+1177.721355649" watchObservedRunningTime="2026-02-19 10:28:09.249798695 +0000 UTC m=+1177.776263381" Feb 19 10:28:09 crc kubenswrapper[4811]: I0219 10:28:09.265431 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:09 crc kubenswrapper[4811]: I0219 10:28:09.622580 4811 scope.go:117] "RemoveContainer" containerID="23d2b84ba24aea742228eab42ecf5fd2bdbe39819a604fa3819182dc1bb5d929" Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.159033 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.225321 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" event={"ID":"cda0926f-a0e7-488c-a0a4-6404d2720495","Type":"ContainerStarted","Data":"51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a"} Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.226850 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.245606 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"5e15f023-b95a-479c-a3b5-902a4b77ff7a","Type":"ContainerStarted","Data":"68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925"} Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.256531 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" event={"ID":"51d8831e-3334-4e4a-84b1-98912b1dc6f2","Type":"ContainerStarted","Data":"7198b284f8885fcd07dae779baa25a6460031d1ce8325810fae9f17a1c839ac6"} Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.256578 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" event={"ID":"51d8831e-3334-4e4a-84b1-98912b1dc6f2","Type":"ContainerStarted","Data":"edf48f785d849490f4f5ea04658838f109e861244c8e8089cd437f1750fbc8c0"} Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.258075 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" podStartSLOduration=12.258061762 podStartE2EDuration="12.258061762s" podCreationTimestamp="2026-02-19 10:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:10.249094743 +0000 UTC m=+1178.775559429" watchObservedRunningTime="2026-02-19 10:28:10.258061762 +0000 UTC m=+1178.784526448" Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.270195 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba","Type":"ContainerStarted","Data":"e1555bf3831f6e08de2830d89cfa4962ed2b5255111a6afc75da4848fea5d9b2"} Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.280920 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=11.791217511 podStartE2EDuration="13.280894685s" podCreationTimestamp="2026-02-19 10:27:57 +0000 UTC" 
firstStartedPulling="2026-02-19 10:28:03.24526033 +0000 UTC m=+1171.771725016" lastFinishedPulling="2026-02-19 10:28:04.734937504 +0000 UTC m=+1173.261402190" observedRunningTime="2026-02-19 10:28:10.273286671 +0000 UTC m=+1178.799751357" watchObservedRunningTime="2026-02-19 10:28:10.280894685 +0000 UTC m=+1178.807359381" Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.290312 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7cc994bfc5-bf96r" event={"ID":"20451628-f7b0-43b3-b0b9-4ab828db8a77","Type":"ContainerStarted","Data":"cec006144e10fb5daffacb66027733b635c65530caf64ddf604991cc6ec8cfd5"} Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.290387 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7cc994bfc5-bf96r" event={"ID":"20451628-f7b0-43b3-b0b9-4ab828db8a77","Type":"ContainerStarted","Data":"224578e494312f45d90cf9df7d65e74b20d34f9cfd9d40070affaee451dea917"} Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.320774 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6566bc4c4b-lsrcb" event={"ID":"39389157-f52c-4caa-85a5-e2306e903180","Type":"ContainerStarted","Data":"38b8c603b0ab6aa5ef1d8250697417cff2bf3f56e2da97fcfa8681be3a5a56bf"} Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.320821 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6566bc4c4b-lsrcb" event={"ID":"39389157-f52c-4caa-85a5-e2306e903180","Type":"ContainerStarted","Data":"3610612470c4da638e9ee9b51b2e2561207163034da82827d8a32387f64e32c6"} Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.336773 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7cc994bfc5-bf96r" podStartSLOduration=6.960685811 podStartE2EDuration="12.336757481s" podCreationTimestamp="2026-02-19 10:27:58 +0000 UTC" firstStartedPulling="2026-02-19 10:28:03.162411745 +0000 UTC m=+1171.688876431" 
lastFinishedPulling="2026-02-19 10:28:08.538483405 +0000 UTC m=+1177.064948101" observedRunningTime="2026-02-19 10:28:10.334811261 +0000 UTC m=+1178.861275947" watchObservedRunningTime="2026-02-19 10:28:10.336757481 +0000 UTC m=+1178.863222167" Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.381146 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-594c4b4cb6-89gd8" podStartSLOduration=7.6803384139999995 podStartE2EDuration="12.381108394s" podCreationTimestamp="2026-02-19 10:27:58 +0000 UTC" firstStartedPulling="2026-02-19 10:28:03.839525591 +0000 UTC m=+1172.365990277" lastFinishedPulling="2026-02-19 10:28:08.540295571 +0000 UTC m=+1177.066760257" observedRunningTime="2026-02-19 10:28:10.307471844 +0000 UTC m=+1178.833936530" watchObservedRunningTime="2026-02-19 10:28:10.381108394 +0000 UTC m=+1178.907573080" Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.413939 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"918b0763-de1c-49fa-8ade-2410fb2e1af3","Type":"ContainerStarted","Data":"80fe05657dcae2b82e7e2a7cd50c3104437b9b2d12bbcaf2dc56952a4c107c0e"} Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.413901 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="918b0763-de1c-49fa-8ade-2410fb2e1af3" containerName="cinder-api-log" containerID="cri-o://803dde998d1dac74a7881d56b130a1eec1ec2e2c13549cbffee678d901cb1dc8" gracePeriod=30 Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.414600 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="918b0763-de1c-49fa-8ade-2410fb2e1af3" containerName="cinder-api" containerID="cri-o://80fe05657dcae2b82e7e2a7cd50c3104437b9b2d12bbcaf2dc56952a4c107c0e" gracePeriod=30 Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.414860 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 10:28:10 crc kubenswrapper[4811]: I0219 10:28:10.493868 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=13.493816433 podStartE2EDuration="13.493816433s" podCreationTimestamp="2026-02-19 10:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:10.479726223 +0000 UTC m=+1179.006190909" watchObservedRunningTime="2026-02-19 10:28:10.493816433 +0000 UTC m=+1179.020281119" Feb 19 10:28:10 crc kubenswrapper[4811]: E0219 10:28:10.703902 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod918b0763_de1c_49fa_8ade_2410fb2e1af3.slice/crio-conmon-803dde998d1dac74a7881d56b130a1eec1ec2e2c13549cbffee678d901cb1dc8.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.478703 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba","Type":"ContainerStarted","Data":"7d68b6f2f543e678bc73abea5afe62d8dc40cd4028d176580ac1b1c6ed41f325"} Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.511945 4811 generic.go:334] "Generic (PLEG): container finished" podID="918b0763-de1c-49fa-8ade-2410fb2e1af3" containerID="80fe05657dcae2b82e7e2a7cd50c3104437b9b2d12bbcaf2dc56952a4c107c0e" exitCode=0 Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.512496 4811 generic.go:334] "Generic (PLEG): container finished" podID="918b0763-de1c-49fa-8ade-2410fb2e1af3" containerID="803dde998d1dac74a7881d56b130a1eec1ec2e2c13549cbffee678d901cb1dc8" exitCode=143 Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.512768 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"918b0763-de1c-49fa-8ade-2410fb2e1af3","Type":"ContainerDied","Data":"80fe05657dcae2b82e7e2a7cd50c3104437b9b2d12bbcaf2dc56952a4c107c0e"} Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.512927 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"918b0763-de1c-49fa-8ade-2410fb2e1af3","Type":"ContainerDied","Data":"803dde998d1dac74a7881d56b130a1eec1ec2e2c13549cbffee678d901cb1dc8"} Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.575554 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6566bc4c4b-lsrcb" event={"ID":"39389157-f52c-4caa-85a5-e2306e903180","Type":"ContainerStarted","Data":"59391d5d13441b14de5d976c5be080631a00fd3a985250438459479dd1ec29e0"} Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.576812 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.578046 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.619554 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6566bc4c4b-lsrcb" podStartSLOduration=6.6195208690000005 podStartE2EDuration="6.619520869s" podCreationTimestamp="2026-02-19 10:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:11.606095786 +0000 UTC m=+1180.132560472" watchObservedRunningTime="2026-02-19 10:28:11.619520869 +0000 UTC m=+1180.145985565" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.649155 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.741538 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/918b0763-de1c-49fa-8ade-2410fb2e1af3-etc-machine-id\") pod \"918b0763-de1c-49fa-8ade-2410fb2e1af3\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.741699 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data-custom\") pod \"918b0763-de1c-49fa-8ade-2410fb2e1af3\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.741696 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/918b0763-de1c-49fa-8ade-2410fb2e1af3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "918b0763-de1c-49fa-8ade-2410fb2e1af3" (UID: "918b0763-de1c-49fa-8ade-2410fb2e1af3"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.741760 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-combined-ca-bundle\") pod \"918b0763-de1c-49fa-8ade-2410fb2e1af3\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.741817 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-scripts\") pod \"918b0763-de1c-49fa-8ade-2410fb2e1af3\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.741899 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data\") pod \"918b0763-de1c-49fa-8ade-2410fb2e1af3\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.742045 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfs2c\" (UniqueName: \"kubernetes.io/projected/918b0763-de1c-49fa-8ade-2410fb2e1af3-kube-api-access-zfs2c\") pod \"918b0763-de1c-49fa-8ade-2410fb2e1af3\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.742080 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/918b0763-de1c-49fa-8ade-2410fb2e1af3-logs\") pod \"918b0763-de1c-49fa-8ade-2410fb2e1af3\" (UID: \"918b0763-de1c-49fa-8ade-2410fb2e1af3\") " Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.742730 4811 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/918b0763-de1c-49fa-8ade-2410fb2e1af3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.743314 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918b0763-de1c-49fa-8ade-2410fb2e1af3-logs" (OuterVolumeSpecName: "logs") pod "918b0763-de1c-49fa-8ade-2410fb2e1af3" (UID: "918b0763-de1c-49fa-8ade-2410fb2e1af3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.821207 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "918b0763-de1c-49fa-8ade-2410fb2e1af3" (UID: "918b0763-de1c-49fa-8ade-2410fb2e1af3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.821850 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "918b0763-de1c-49fa-8ade-2410fb2e1af3" (UID: "918b0763-de1c-49fa-8ade-2410fb2e1af3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.822618 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918b0763-de1c-49fa-8ade-2410fb2e1af3-kube-api-access-zfs2c" (OuterVolumeSpecName: "kube-api-access-zfs2c") pod "918b0763-de1c-49fa-8ade-2410fb2e1af3" (UID: "918b0763-de1c-49fa-8ade-2410fb2e1af3"). InnerVolumeSpecName "kube-api-access-zfs2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.836610 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-scripts" (OuterVolumeSpecName: "scripts") pod "918b0763-de1c-49fa-8ade-2410fb2e1af3" (UID: "918b0763-de1c-49fa-8ade-2410fb2e1af3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.848036 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.848073 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfs2c\" (UniqueName: \"kubernetes.io/projected/918b0763-de1c-49fa-8ade-2410fb2e1af3-kube-api-access-zfs2c\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.848086 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/918b0763-de1c-49fa-8ade-2410fb2e1af3-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.848096 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.848104 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.901604 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data" (OuterVolumeSpecName: "config-data") pod "918b0763-de1c-49fa-8ade-2410fb2e1af3" (UID: "918b0763-de1c-49fa-8ade-2410fb2e1af3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:11 crc kubenswrapper[4811]: I0219 10:28:11.950654 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/918b0763-de1c-49fa-8ade-2410fb2e1af3-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.145829 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.395884 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.610279 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba","Type":"ContainerStarted","Data":"9eb04e82a89034fe952c9299ef9a8bad76f092f8d85bf1158dcce295b10a9d7a"} Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.613825 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.617842 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"918b0763-de1c-49fa-8ade-2410fb2e1af3","Type":"ContainerDied","Data":"70ac7e24b5f0a96c65b53f2e502e702d1d43b0c6f77cc33e7a9597359087ab9f"} Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.617943 4811 scope.go:117] "RemoveContainer" containerID="80fe05657dcae2b82e7e2a7cd50c3104437b9b2d12bbcaf2dc56952a4c107c0e" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.724855 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6df884f584-x4z2k"] Feb 19 10:28:12 crc kubenswrapper[4811]: E0219 10:28:12.725375 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" containerName="horizon" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725408 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" containerName="horizon" Feb 19 10:28:12 crc kubenswrapper[4811]: E0219 10:28:12.725421 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918b0763-de1c-49fa-8ade-2410fb2e1af3" containerName="cinder-api" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725427 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="918b0763-de1c-49fa-8ade-2410fb2e1af3" containerName="cinder-api" Feb 19 10:28:12 crc kubenswrapper[4811]: E0219 10:28:12.725466 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918b0763-de1c-49fa-8ade-2410fb2e1af3" containerName="cinder-api-log" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725473 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="918b0763-de1c-49fa-8ade-2410fb2e1af3" containerName="cinder-api-log" Feb 19 10:28:12 crc kubenswrapper[4811]: E0219 10:28:12.725488 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="82bf6795-28db-4abd-923a-9706c1ca71d5" containerName="init" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725493 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bf6795-28db-4abd-923a-9706c1ca71d5" containerName="init" Feb 19 10:28:12 crc kubenswrapper[4811]: E0219 10:28:12.725501 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" containerName="horizon-log" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725508 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" containerName="horizon-log" Feb 19 10:28:12 crc kubenswrapper[4811]: E0219 10:28:12.725521 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd777ce0-fb21-405b-8619-19318d2057d4" containerName="horizon-log" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725542 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd777ce0-fb21-405b-8619-19318d2057d4" containerName="horizon-log" Feb 19 10:28:12 crc kubenswrapper[4811]: E0219 10:28:12.725559 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd777ce0-fb21-405b-8619-19318d2057d4" containerName="horizon" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725565 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd777ce0-fb21-405b-8619-19318d2057d4" containerName="horizon" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725776 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" containerName="horizon" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725792 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="918b0763-de1c-49fa-8ade-2410fb2e1af3" containerName="cinder-api" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725800 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd777ce0-fb21-405b-8619-19318d2057d4" containerName="horizon" Feb 
19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725809 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bb664f-8ad6-4dcd-a8ed-77b319ccf64f" containerName="horizon-log" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725816 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd777ce0-fb21-405b-8619-19318d2057d4" containerName="horizon-log" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725826 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bf6795-28db-4abd-923a-9706c1ca71d5" containerName="init" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.725852 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="918b0763-de1c-49fa-8ade-2410fb2e1af3" containerName="cinder-api-log" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.727238 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.821807 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.823123 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.853728 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.157:8080/\": dial tcp 10.217.0.157:8080: connect: connection refused" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.892979 4811 scope.go:117] "RemoveContainer" containerID="803dde998d1dac74a7881d56b130a1eec1ec2e2c13549cbffee678d901cb1dc8" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.942667 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/f7db31a7-4566-4027-81f8-af4496f7dc07-logs\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.942809 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-internal-tls-certs\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.944174 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-config-data\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.944271 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-public-tls-certs\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.944468 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbtgh\" (UniqueName: \"kubernetes.io/projected/f7db31a7-4566-4027-81f8-af4496f7dc07-kube-api-access-hbtgh\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.944606 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-scripts\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.944745 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-combined-ca-bundle\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:12 crc kubenswrapper[4811]: I0219 10:28:12.955555 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6df884f584-x4z2k"] Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.015559 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.049606 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-config-data\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.049656 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-public-tls-certs\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.049705 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbtgh\" (UniqueName: \"kubernetes.io/projected/f7db31a7-4566-4027-81f8-af4496f7dc07-kube-api-access-hbtgh\") pod 
\"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.049744 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-scripts\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.049778 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-combined-ca-bundle\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.049835 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7db31a7-4566-4027-81f8-af4496f7dc07-logs\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.049861 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-internal-tls-certs\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.051336 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7db31a7-4566-4027-81f8-af4496f7dc07-logs\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 
19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.061949 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-internal-tls-certs\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.062556 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-config-data\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.063718 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-combined-ca-bundle\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.064523 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.068083 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-scripts\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.076560 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.081913 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7db31a7-4566-4027-81f8-af4496f7dc07-public-tls-certs\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.088942 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.089019 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.089267 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.092900 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbtgh\" (UniqueName: \"kubernetes.io/projected/f7db31a7-4566-4027-81f8-af4496f7dc07-kube-api-access-hbtgh\") pod \"placement-6df884f584-x4z2k\" (UID: \"f7db31a7-4566-4027-81f8-af4496f7dc07\") " pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.106600 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.131890 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.262318 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-scripts\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.262858 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-config-data\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.262901 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-config-data-custom\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.262962 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7n9\" (UniqueName: \"kubernetes.io/projected/ebb19588-7922-460c-9368-05aaf6706b52-kube-api-access-9m7n9\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.263023 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 
10:28:13.263046 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.263073 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebb19588-7922-460c-9368-05aaf6706b52-logs\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.263110 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.263138 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebb19588-7922-460c-9368-05aaf6706b52-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.364284 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.364330 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.364353 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebb19588-7922-460c-9368-05aaf6706b52-logs\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.366694 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.366729 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebb19588-7922-460c-9368-05aaf6706b52-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.366765 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-scripts\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.366806 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-config-data\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.367506 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-config-data-custom\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.367604 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m7n9\" (UniqueName: \"kubernetes.io/projected/ebb19588-7922-460c-9368-05aaf6706b52-kube-api-access-9m7n9\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.368830 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebb19588-7922-460c-9368-05aaf6706b52-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.371184 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebb19588-7922-460c-9368-05aaf6706b52-logs\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.374581 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-scripts\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.379997 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " 
pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.385224 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.386041 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-config-data\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.387158 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-config-data-custom\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.401975 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m7n9\" (UniqueName: \"kubernetes.io/projected/ebb19588-7922-460c-9368-05aaf6706b52-kube-api-access-9m7n9\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.403273 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb19588-7922-460c-9368-05aaf6706b52-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ebb19588-7922-460c-9368-05aaf6706b52\") " pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.587651 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.632205 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba","Type":"ContainerStarted","Data":"8433db333c723c8a14bcbd03ad4d846e7838bc0b6c830f98613eea3b083f3593"} Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.745151 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:28:13 crc kubenswrapper[4811]: I0219 10:28:13.821515 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6df884f584-x4z2k"] Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.227150 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 10:28:14 crc kubenswrapper[4811]: W0219 10:28:14.233583 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebb19588_7922_460c_9368_05aaf6706b52.slice/crio-a014c9fbc71ffa5eafe82778473c01cccbd821c4a024ae4aae1ea7d646b5b9a7 WatchSource:0}: Error finding container a014c9fbc71ffa5eafe82778473c01cccbd821c4a024ae4aae1ea7d646b5b9a7: Status 404 returned error can't find the container with id a014c9fbc71ffa5eafe82778473c01cccbd821c4a024ae4aae1ea7d646b5b9a7 Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.278925 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.357396 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-f5hrk"] Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.357782 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" podUID="4500564b-03cc-41f6-9c64-8601e8e279b4" containerName="dnsmasq-dns" 
containerID="cri-o://bbec92bc055f0a5cf43754d2e5274ae1f5fabf734e051fa78ed7e176bc3ac2a3" gracePeriod=10 Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.603873 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918b0763-de1c-49fa-8ade-2410fb2e1af3" path="/var/lib/kubelet/pods/918b0763-de1c-49fa-8ade-2410fb2e1af3/volumes" Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.682836 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6df884f584-x4z2k" event={"ID":"f7db31a7-4566-4027-81f8-af4496f7dc07","Type":"ContainerStarted","Data":"b549a9964ff8b55621ecd1980e0cdab237b0e39c15639d98d3aeb3e847286c28"} Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.682957 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6df884f584-x4z2k" event={"ID":"f7db31a7-4566-4027-81f8-af4496f7dc07","Type":"ContainerStarted","Data":"6e31f22cccd0aca65c00bc315216765890b79ff7dfc9413b0d37667a6b04b653"} Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.682976 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6df884f584-x4z2k" event={"ID":"f7db31a7-4566-4027-81f8-af4496f7dc07","Type":"ContainerStarted","Data":"d4384020aa8e7ee2170fbc3ca09156e1d9074550c078a01ebffaa1afaa1bdb23"} Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.685694 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.685768 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.698726 4811 generic.go:334] "Generic (PLEG): container finished" podID="4500564b-03cc-41f6-9c64-8601e8e279b4" containerID="bbec92bc055f0a5cf43754d2e5274ae1f5fabf734e051fa78ed7e176bc3ac2a3" exitCode=0 Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.698858 4811 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" event={"ID":"4500564b-03cc-41f6-9c64-8601e8e279b4","Type":"ContainerDied","Data":"bbec92bc055f0a5cf43754d2e5274ae1f5fabf734e051fa78ed7e176bc3ac2a3"} Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.703394 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ebb19588-7922-460c-9368-05aaf6706b52","Type":"ContainerStarted","Data":"a014c9fbc71ffa5eafe82778473c01cccbd821c4a024ae4aae1ea7d646b5b9a7"} Feb 19 10:28:14 crc kubenswrapper[4811]: I0219 10:28:14.710878 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6df884f584-x4z2k" podStartSLOduration=2.710855997 podStartE2EDuration="2.710855997s" podCreationTimestamp="2026-02-19 10:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:14.709643086 +0000 UTC m=+1183.236107772" watchObservedRunningTime="2026-02-19 10:28:14.710855997 +0000 UTC m=+1183.237320683" Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.605692 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.796132 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjkhx\" (UniqueName: \"kubernetes.io/projected/4500564b-03cc-41f6-9c64-8601e8e279b4-kube-api-access-rjkhx\") pod \"4500564b-03cc-41f6-9c64-8601e8e279b4\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.796304 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-config\") pod \"4500564b-03cc-41f6-9c64-8601e8e279b4\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.796378 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-swift-storage-0\") pod \"4500564b-03cc-41f6-9c64-8601e8e279b4\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.796509 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-nb\") pod \"4500564b-03cc-41f6-9c64-8601e8e279b4\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.796558 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-sb\") pod \"4500564b-03cc-41f6-9c64-8601e8e279b4\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.796810 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-svc\") pod \"4500564b-03cc-41f6-9c64-8601e8e279b4\" (UID: \"4500564b-03cc-41f6-9c64-8601e8e279b4\") " Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.826008 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4500564b-03cc-41f6-9c64-8601e8e279b4-kube-api-access-rjkhx" (OuterVolumeSpecName: "kube-api-access-rjkhx") pod "4500564b-03cc-41f6-9c64-8601e8e279b4" (UID: "4500564b-03cc-41f6-9c64-8601e8e279b4"). InnerVolumeSpecName "kube-api-access-rjkhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.844703 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" event={"ID":"4500564b-03cc-41f6-9c64-8601e8e279b4","Type":"ContainerDied","Data":"b151de5c0e60588d0d56aca4b12a477d32dd2ff0aec4563a274262af967a3a4f"} Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.845169 4811 scope.go:117] "RemoveContainer" containerID="bbec92bc055f0a5cf43754d2e5274ae1f5fabf734e051fa78ed7e176bc3ac2a3" Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.845344 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-f5hrk" Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.856436 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ebb19588-7922-460c-9368-05aaf6706b52","Type":"ContainerStarted","Data":"d67a1f55961222a44ba1875bd4b4476ff1d208aadeb56fe3f457a38a8c2179f8"} Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.881610 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4500564b-03cc-41f6-9c64-8601e8e279b4" (UID: "4500564b-03cc-41f6-9c64-8601e8e279b4"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.883361 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4500564b-03cc-41f6-9c64-8601e8e279b4" (UID: "4500564b-03cc-41f6-9c64-8601e8e279b4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.900860 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjkhx\" (UniqueName: \"kubernetes.io/projected/4500564b-03cc-41f6-9c64-8601e8e279b4-kube-api-access-rjkhx\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.900887 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.900896 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.912851 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-config" (OuterVolumeSpecName: "config") pod "4500564b-03cc-41f6-9c64-8601e8e279b4" (UID: "4500564b-03cc-41f6-9c64-8601e8e279b4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:15 crc kubenswrapper[4811]: I0219 10:28:15.949135 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4500564b-03cc-41f6-9c64-8601e8e279b4" (UID: "4500564b-03cc-41f6-9c64-8601e8e279b4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.007807 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4500564b-03cc-41f6-9c64-8601e8e279b4" (UID: "4500564b-03cc-41f6-9c64-8601e8e279b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.013510 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.013564 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.013579 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4500564b-03cc-41f6-9c64-8601e8e279b4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.063666 4811 scope.go:117] "RemoveContainer" containerID="c9c1705b94a03717e17c8fb55608cbaf1df37636b5424789940567829b899856" Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.204523 4811 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-55f844cf75-f5hrk"] Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.211609 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-f5hrk"] Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.595365 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4500564b-03cc-41f6-9c64-8601e8e279b4" path="/var/lib/kubelet/pods/4500564b-03cc-41f6-9c64-8601e8e279b4/volumes" Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.867118 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ebb19588-7922-460c-9368-05aaf6706b52","Type":"ContainerStarted","Data":"de2bdbc00f9d6c4e01f1d53278314fc9bc98e25aa70779726608f860d477969e"} Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.867297 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.871735 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba","Type":"ContainerStarted","Data":"8ab88e5f9e00222e95e08ed019c3e8f7ed106fbe8565e66d98104ef4839f0e11"} Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.871903 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 10:28:16.893948 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.893911893 podStartE2EDuration="4.893911893s" podCreationTimestamp="2026-02-19 10:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:16.891067031 +0000 UTC m=+1185.417531707" watchObservedRunningTime="2026-02-19 10:28:16.893911893 +0000 UTC m=+1185.420376579" Feb 19 10:28:16 crc kubenswrapper[4811]: I0219 
10:28:16.919747 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.244008298 podStartE2EDuration="10.919716952s" podCreationTimestamp="2026-02-19 10:28:06 +0000 UTC" firstStartedPulling="2026-02-19 10:28:09.638609348 +0000 UTC m=+1178.165074034" lastFinishedPulling="2026-02-19 10:28:15.314318002 +0000 UTC m=+1183.840782688" observedRunningTime="2026-02-19 10:28:16.910897617 +0000 UTC m=+1185.437362303" watchObservedRunningTime="2026-02-19 10:28:16.919716952 +0000 UTC m=+1185.446181638" Feb 19 10:28:17 crc kubenswrapper[4811]: I0219 10:28:17.176489 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:17 crc kubenswrapper[4811]: I0219 10:28:17.230146 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6566bc4c4b-lsrcb" Feb 19 10:28:17 crc kubenswrapper[4811]: I0219 10:28:17.344514 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75675fb4db-4qcj7"] Feb 19 10:28:17 crc kubenswrapper[4811]: I0219 10:28:17.344768 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75675fb4db-4qcj7" podUID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" containerName="barbican-api-log" containerID="cri-o://d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc" gracePeriod=30 Feb 19 10:28:17 crc kubenswrapper[4811]: I0219 10:28:17.345321 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75675fb4db-4qcj7" podUID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" containerName="barbican-api" containerID="cri-o://669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094" gracePeriod=30 Feb 19 10:28:17 crc kubenswrapper[4811]: I0219 10:28:17.888873 4811 generic.go:334] "Generic (PLEG): container finished" podID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" 
containerID="d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc" exitCode=143 Feb 19 10:28:17 crc kubenswrapper[4811]: I0219 10:28:17.888963 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75675fb4db-4qcj7" event={"ID":"4cb1b23f-5500-410f-af9c-8e47ed4d0250","Type":"ContainerDied","Data":"d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc"} Feb 19 10:28:18 crc kubenswrapper[4811]: I0219 10:28:18.231936 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 10:28:18 crc kubenswrapper[4811]: I0219 10:28:18.307429 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:28:18 crc kubenswrapper[4811]: I0219 10:28:18.898987 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" containerName="cinder-scheduler" containerID="cri-o://2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b" gracePeriod=30 Feb 19 10:28:18 crc kubenswrapper[4811]: I0219 10:28:18.899013 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" containerName="probe" containerID="cri-o://68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925" gracePeriod=30 Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.400177 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-54bd9d4f8c-p9wzd" Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.845225 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 10:28:19 crc kubenswrapper[4811]: E0219 10:28:19.846478 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4500564b-03cc-41f6-9c64-8601e8e279b4" containerName="dnsmasq-dns" Feb 19 10:28:19 crc kubenswrapper[4811]: 
I0219 10:28:19.846506 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4500564b-03cc-41f6-9c64-8601e8e279b4" containerName="dnsmasq-dns" Feb 19 10:28:19 crc kubenswrapper[4811]: E0219 10:28:19.846555 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4500564b-03cc-41f6-9c64-8601e8e279b4" containerName="init" Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.846566 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4500564b-03cc-41f6-9c64-8601e8e279b4" containerName="init" Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.847165 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4500564b-03cc-41f6-9c64-8601e8e279b4" containerName="dnsmasq-dns" Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.848092 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.853285 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.853397 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-drgjp" Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.853432 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.858764 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.917372 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c6494cd0-c929-4278-974b-0839193092f1-openstack-config\") pod \"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 
10:28:19.917531 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krs62\" (UniqueName: \"kubernetes.io/projected/c6494cd0-c929-4278-974b-0839193092f1-kube-api-access-krs62\") pod \"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.917725 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c6494cd0-c929-4278-974b-0839193092f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:19 crc kubenswrapper[4811]: I0219 10:28:19.917785 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6494cd0-c929-4278-974b-0839193092f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.019704 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krs62\" (UniqueName: \"kubernetes.io/projected/c6494cd0-c929-4278-974b-0839193092f1-kube-api-access-krs62\") pod \"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.019965 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c6494cd0-c929-4278-974b-0839193092f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.020003 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6494cd0-c929-4278-974b-0839193092f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.020132 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c6494cd0-c929-4278-974b-0839193092f1-openstack-config\") pod \"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.021177 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c6494cd0-c929-4278-974b-0839193092f1-openstack-config\") pod \"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.033428 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c6494cd0-c929-4278-974b-0839193092f1-openstack-config-secret\") pod \"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.041325 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6494cd0-c929-4278-974b-0839193092f1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.049670 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krs62\" (UniqueName: \"kubernetes.io/projected/c6494cd0-c929-4278-974b-0839193092f1-kube-api-access-krs62\") pod 
\"openstackclient\" (UID: \"c6494cd0-c929-4278-974b-0839193092f1\") " pod="openstack/openstackclient" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.181526 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.641934 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.879850 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data-custom\") pod \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.879943 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e15f023-b95a-479c-a3b5-902a4b77ff7a-etc-machine-id\") pod \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.880053 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-combined-ca-bundle\") pod \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.880288 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv6tx\" (UniqueName: \"kubernetes.io/projected/5e15f023-b95a-479c-a3b5-902a4b77ff7a-kube-api-access-gv6tx\") pod \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.880330 4811 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-scripts\") pod \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.880405 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data\") pod \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\" (UID: \"5e15f023-b95a-479c-a3b5-902a4b77ff7a\") " Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.891985 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.893626 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e15f023-b95a-479c-a3b5-902a4b77ff7a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5e15f023-b95a-479c-a3b5-902a4b77ff7a" (UID: "5e15f023-b95a-479c-a3b5-902a4b77ff7a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.896215 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-scripts" (OuterVolumeSpecName: "scripts") pod "5e15f023-b95a-479c-a3b5-902a4b77ff7a" (UID: "5e15f023-b95a-479c-a3b5-902a4b77ff7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.900909 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5e15f023-b95a-479c-a3b5-902a4b77ff7a" (UID: "5e15f023-b95a-479c-a3b5-902a4b77ff7a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.912862 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e15f023-b95a-479c-a3b5-902a4b77ff7a-kube-api-access-gv6tx" (OuterVolumeSpecName: "kube-api-access-gv6tx") pod "5e15f023-b95a-479c-a3b5-902a4b77ff7a" (UID: "5e15f023-b95a-479c-a3b5-902a4b77ff7a"). InnerVolumeSpecName "kube-api-access-gv6tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.990488 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv6tx\" (UniqueName: \"kubernetes.io/projected/5e15f023-b95a-479c-a3b5-902a4b77ff7a-kube-api-access-gv6tx\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.990548 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.990562 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.990574 4811 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e15f023-b95a-479c-a3b5-902a4b77ff7a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.990959 4811 generic.go:334] "Generic (PLEG): container finished" podID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" containerID="68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925" exitCode=0 Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.990994 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" containerID="2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b" exitCode=0 Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.991127 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e15f023-b95a-479c-a3b5-902a4b77ff7a","Type":"ContainerDied","Data":"68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925"} Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.991175 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e15f023-b95a-479c-a3b5-902a4b77ff7a","Type":"ContainerDied","Data":"2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b"} Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.991187 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5e15f023-b95a-479c-a3b5-902a4b77ff7a","Type":"ContainerDied","Data":"e9903100a01cb195d82ab4843d1849274d0fefcae037bd39ae474a5e79a54bdb"} Feb 19 10:28:20 crc kubenswrapper[4811]: I0219 10:28:20.991218 4811 scope.go:117] "RemoveContainer" containerID="68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.026033 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.038576 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c6494cd0-c929-4278-974b-0839193092f1","Type":"ContainerStarted","Data":"efcd6456f2916171e52b7036962ef667ab4eeb7f9e8bdabc35bf5fb6c6a6b79c"} Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.064654 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e15f023-b95a-479c-a3b5-902a4b77ff7a" (UID: "5e15f023-b95a-479c-a3b5-902a4b77ff7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.095718 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.146680 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data" (OuterVolumeSpecName: "config-data") pod "5e15f023-b95a-479c-a3b5-902a4b77ff7a" (UID: "5e15f023-b95a-479c-a3b5-902a4b77ff7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.155145 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75675fb4db-4qcj7" podUID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:51088->10.217.0.163:9311: read: connection reset by peer" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.155593 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75675fb4db-4qcj7" podUID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:51090->10.217.0.163:9311: read: connection reset by peer" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.197439 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e15f023-b95a-479c-a3b5-902a4b77ff7a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.259509 4811 scope.go:117] "RemoveContainer" containerID="2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b" Feb 19 10:28:21 crc kubenswrapper[4811]: E0219 10:28:21.273693 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cb1b23f_5500_410f_af9c_8e47ed4d0250.slice/crio-conmon-669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.306615 4811 scope.go:117] "RemoveContainer" containerID="68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925" Feb 19 10:28:21 crc kubenswrapper[4811]: E0219 10:28:21.308053 4811 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925\": container with ID starting with 68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925 not found: ID does not exist" containerID="68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.308098 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925"} err="failed to get container status \"68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925\": rpc error: code = NotFound desc = could not find container \"68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925\": container with ID starting with 68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925 not found: ID does not exist" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.308124 4811 scope.go:117] "RemoveContainer" containerID="2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b" Feb 19 10:28:21 crc kubenswrapper[4811]: E0219 10:28:21.308479 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b\": container with ID starting with 2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b not found: ID does not exist" containerID="2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.308534 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b"} err="failed to get container status \"2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b\": rpc error: code = NotFound desc = could not find container 
\"2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b\": container with ID starting with 2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b not found: ID does not exist" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.308575 4811 scope.go:117] "RemoveContainer" containerID="68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.308919 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925"} err="failed to get container status \"68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925\": rpc error: code = NotFound desc = could not find container \"68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925\": container with ID starting with 68aaeffb79e16bc81f2350cf738048350a83903c3b7091b09ece0f92d711b925 not found: ID does not exist" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.308951 4811 scope.go:117] "RemoveContainer" containerID="2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.309247 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b"} err="failed to get container status \"2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b\": rpc error: code = NotFound desc = could not find container \"2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b\": container with ID starting with 2030cafd797152262c9791e7f19e3c59f8979f1dc37e03fef2c2a2214d4d1a3b not found: ID does not exist" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.374527 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.401613 4811 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.430923 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:28:21 crc kubenswrapper[4811]: E0219 10:28:21.431438 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" containerName="probe" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.431470 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" containerName="probe" Feb 19 10:28:21 crc kubenswrapper[4811]: E0219 10:28:21.431495 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" containerName="cinder-scheduler" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.431506 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" containerName="cinder-scheduler" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.431700 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" containerName="probe" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.431719 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" containerName="cinder-scheduler" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.432833 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.436114 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.441805 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.511318 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.511654 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.511946 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-config-data\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.512063 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 
10:28:21.512105 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpvq\" (UniqueName: \"kubernetes.io/projected/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-kube-api-access-bgpvq\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.512135 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-scripts\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.618646 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.618719 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpvq\" (UniqueName: \"kubernetes.io/projected/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-kube-api-access-bgpvq\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.618747 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-scripts\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.618807 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.618861 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.618952 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-config-data\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.619345 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.625347 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.627070 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-scripts\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " 
pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.628426 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.638333 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpvq\" (UniqueName: \"kubernetes.io/projected/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-kube-api-access-bgpvq\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.642437 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed-config-data\") pod \"cinder-scheduler-0\" (UID: \"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed\") " pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.767731 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.863003 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.934513 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data\") pod \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.934627 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data-custom\") pod \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.934851 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-combined-ca-bundle\") pod \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.934959 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb1b23f-5500-410f-af9c-8e47ed4d0250-logs\") pod \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.935090 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg295\" (UniqueName: \"kubernetes.io/projected/4cb1b23f-5500-410f-af9c-8e47ed4d0250-kube-api-access-bg295\") pod \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\" (UID: \"4cb1b23f-5500-410f-af9c-8e47ed4d0250\") " Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.938270 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4cb1b23f-5500-410f-af9c-8e47ed4d0250-logs" (OuterVolumeSpecName: "logs") pod "4cb1b23f-5500-410f-af9c-8e47ed4d0250" (UID: "4cb1b23f-5500-410f-af9c-8e47ed4d0250"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.942863 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb1b23f-5500-410f-af9c-8e47ed4d0250-kube-api-access-bg295" (OuterVolumeSpecName: "kube-api-access-bg295") pod "4cb1b23f-5500-410f-af9c-8e47ed4d0250" (UID: "4cb1b23f-5500-410f-af9c-8e47ed4d0250"). InnerVolumeSpecName "kube-api-access-bg295". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.943630 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4cb1b23f-5500-410f-af9c-8e47ed4d0250" (UID: "4cb1b23f-5500-410f-af9c-8e47ed4d0250"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:21 crc kubenswrapper[4811]: I0219 10:28:21.983201 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cb1b23f-5500-410f-af9c-8e47ed4d0250" (UID: "4cb1b23f-5500-410f-af9c-8e47ed4d0250"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.027618 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data" (OuterVolumeSpecName: "config-data") pod "4cb1b23f-5500-410f-af9c-8e47ed4d0250" (UID: "4cb1b23f-5500-410f-af9c-8e47ed4d0250"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.042009 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg295\" (UniqueName: \"kubernetes.io/projected/4cb1b23f-5500-410f-af9c-8e47ed4d0250-kube-api-access-bg295\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.042066 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.042080 4811 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.042091 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb1b23f-5500-410f-af9c-8e47ed4d0250-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.042107 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cb1b23f-5500-410f-af9c-8e47ed4d0250-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.075416 4811 generic.go:334] "Generic (PLEG): container finished" podID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" containerID="669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094" exitCode=0 Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.075502 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75675fb4db-4qcj7" event={"ID":"4cb1b23f-5500-410f-af9c-8e47ed4d0250","Type":"ContainerDied","Data":"669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094"} Feb 19 
10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.075544 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75675fb4db-4qcj7" event={"ID":"4cb1b23f-5500-410f-af9c-8e47ed4d0250","Type":"ContainerDied","Data":"a234e7ac45ea2aa0a3b8911c565989208efec6e720549f05e154350fb5801bd8"} Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.075567 4811 scope.go:117] "RemoveContainer" containerID="669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.075744 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75675fb4db-4qcj7" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.150809 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75675fb4db-4qcj7"] Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.160175 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-75675fb4db-4qcj7"] Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.190931 4811 scope.go:117] "RemoveContainer" containerID="d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.250573 4811 scope.go:117] "RemoveContainer" containerID="669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094" Feb 19 10:28:22 crc kubenswrapper[4811]: E0219 10:28:22.254222 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094\": container with ID starting with 669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094 not found: ID does not exist" containerID="669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.254265 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094"} err="failed to get container status \"669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094\": rpc error: code = NotFound desc = could not find container \"669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094\": container with ID starting with 669a26d64323bd95297ac199a63c329db830146faa04dfda34bb1fd46c4b1094 not found: ID does not exist" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.254298 4811 scope.go:117] "RemoveContainer" containerID="d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc" Feb 19 10:28:22 crc kubenswrapper[4811]: E0219 10:28:22.257158 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc\": container with ID starting with d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc not found: ID does not exist" containerID="d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.257187 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc"} err="failed to get container status \"d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc\": rpc error: code = NotFound desc = could not find container \"d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc\": container with ID starting with d9f207c98146fdc8046629b93ab68c689058e595bc876f78079886f61b4393fc not found: ID does not exist" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.470905 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 10:28:22 crc kubenswrapper[4811]: W0219 10:28:22.486684 4811 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fb1d9a8_09df_4b01_bb55_ab0a032cd0ed.slice/crio-10b5acbc81d8c4b11cfe9e21658c2d910632b80212457815988a05f532124c47 WatchSource:0}: Error finding container 10b5acbc81d8c4b11cfe9e21658c2d910632b80212457815988a05f532124c47: Status 404 returned error can't find the container with id 10b5acbc81d8c4b11cfe9e21658c2d910632b80212457815988a05f532124c47 Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.612461 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" path="/var/lib/kubelet/pods/4cb1b23f-5500-410f-af9c-8e47ed4d0250/volumes" Feb 19 10:28:22 crc kubenswrapper[4811]: I0219 10:28:22.613238 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e15f023-b95a-479c-a3b5-902a4b77ff7a" path="/var/lib/kubelet/pods/5e15f023-b95a-479c-a3b5-902a4b77ff7a/volumes" Feb 19 10:28:23 crc kubenswrapper[4811]: I0219 10:28:23.109261 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed","Type":"ContainerStarted","Data":"10b5acbc81d8c4b11cfe9e21658c2d910632b80212457815988a05f532124c47"} Feb 19 10:28:24 crc kubenswrapper[4811]: I0219 10:28:24.131479 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed","Type":"ContainerStarted","Data":"87cbaa1d6e4134831b1c40f96804686f479af3028d3d10f2c022373fdbecb5cf"} Feb 19 10:28:24 crc kubenswrapper[4811]: I0219 10:28:24.131983 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed","Type":"ContainerStarted","Data":"1b3fd2e63185397f7fbd0c5d5b0c76c0d5e0d811b93f207b55ef729929782d47"} Feb 19 10:28:24 crc kubenswrapper[4811]: I0219 10:28:24.177949 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" 
podStartSLOduration=3.177907232 podStartE2EDuration="3.177907232s" podCreationTimestamp="2026-02-19 10:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:24.173604412 +0000 UTC m=+1192.700069098" watchObservedRunningTime="2026-02-19 10:28:24.177907232 +0000 UTC m=+1192.704371918" Feb 19 10:28:26 crc kubenswrapper[4811]: I0219 10:28:26.154022 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 10:28:26 crc kubenswrapper[4811]: I0219 10:28:26.768971 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 10:28:26 crc kubenswrapper[4811]: I0219 10:28:26.787845 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:28:26 crc kubenswrapper[4811]: I0219 10:28:26.787926 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.589342 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-666bdb77b9-x6mrc"] Feb 19 10:28:27 crc kubenswrapper[4811]: E0219 10:28:27.589741 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" containerName="barbican-api-log" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.589753 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" containerName="barbican-api-log" Feb 19 10:28:27 crc kubenswrapper[4811]: E0219 10:28:27.589782 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" containerName="barbican-api" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.589788 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" containerName="barbican-api" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.589987 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" containerName="barbican-api" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.590002 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cb1b23f-5500-410f-af9c-8e47ed4d0250" containerName="barbican-api-log" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.590892 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.594244 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.595404 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.596811 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.614998 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-666bdb77b9-x6mrc"] Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.682163 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0beeae9e-3b17-4691-8273-95fbc4be85fb-run-httpd\") pod 
\"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.682248 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-config-data\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.682987 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-public-tls-certs\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.683042 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2x6z\" (UniqueName: \"kubernetes.io/projected/0beeae9e-3b17-4691-8273-95fbc4be85fb-kube-api-access-k2x6z\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.683094 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0beeae9e-3b17-4691-8273-95fbc4be85fb-log-httpd\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.683157 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-internal-tls-certs\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.683212 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0beeae9e-3b17-4691-8273-95fbc4be85fb-etc-swift\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.683241 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-combined-ca-bundle\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.775803 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-kpczv"] Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.778161 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kpczv" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.785761 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0beeae9e-3b17-4691-8273-95fbc4be85fb-run-httpd\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.785866 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-config-data\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.786015 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-public-tls-certs\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.786060 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2x6z\" (UniqueName: \"kubernetes.io/projected/0beeae9e-3b17-4691-8273-95fbc4be85fb-kube-api-access-k2x6z\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.786114 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0beeae9e-3b17-4691-8273-95fbc4be85fb-log-httpd\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" 
Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.786176 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-internal-tls-certs\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.786226 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0beeae9e-3b17-4691-8273-95fbc4be85fb-etc-swift\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.786257 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-combined-ca-bundle\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.787747 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0beeae9e-3b17-4691-8273-95fbc4be85fb-run-httpd\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.791641 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0beeae9e-3b17-4691-8273-95fbc4be85fb-log-httpd\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.799946 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0beeae9e-3b17-4691-8273-95fbc4be85fb-etc-swift\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.806009 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-config-data\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.806675 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-public-tls-certs\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.814039 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-internal-tls-certs\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.832281 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0beeae9e-3b17-4691-8273-95fbc4be85fb-combined-ca-bundle\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.856564 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2x6z\" 
(UniqueName: \"kubernetes.io/projected/0beeae9e-3b17-4691-8273-95fbc4be85fb-kube-api-access-k2x6z\") pod \"swift-proxy-666bdb77b9-x6mrc\" (UID: \"0beeae9e-3b17-4691-8273-95fbc4be85fb\") " pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.876830 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kpczv"] Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.891076 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67ljk\" (UniqueName: \"kubernetes.io/projected/3dfb180a-094f-4716-b7b8-a50cb4841cc9-kube-api-access-67ljk\") pod \"nova-api-db-create-kpczv\" (UID: \"3dfb180a-094f-4716-b7b8-a50cb4841cc9\") " pod="openstack/nova-api-db-create-kpczv" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.891498 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfb180a-094f-4716-b7b8-a50cb4841cc9-operator-scripts\") pod \"nova-api-db-create-kpczv\" (UID: \"3dfb180a-094f-4716-b7b8-a50cb4841cc9\") " pod="openstack/nova-api-db-create-kpczv" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.925032 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.991540 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3102-account-create-update-qk57q"] Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.993309 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3102-account-create-update-qk57q" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.997281 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfb180a-094f-4716-b7b8-a50cb4841cc9-operator-scripts\") pod \"nova-api-db-create-kpczv\" (UID: \"3dfb180a-094f-4716-b7b8-a50cb4841cc9\") " pod="openstack/nova-api-db-create-kpczv" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.997389 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67ljk\" (UniqueName: \"kubernetes.io/projected/3dfb180a-094f-4716-b7b8-a50cb4841cc9-kube-api-access-67ljk\") pod \"nova-api-db-create-kpczv\" (UID: \"3dfb180a-094f-4716-b7b8-a50cb4841cc9\") " pod="openstack/nova-api-db-create-kpczv" Feb 19 10:28:27 crc kubenswrapper[4811]: I0219 10:28:27.998522 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfb180a-094f-4716-b7b8-a50cb4841cc9-operator-scripts\") pod \"nova-api-db-create-kpczv\" (UID: \"3dfb180a-094f-4716-b7b8-a50cb4841cc9\") " pod="openstack/nova-api-db-create-kpczv" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.003552 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.017146 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3102-account-create-update-qk57q"] Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.028543 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-tc999"] Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.030143 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tc999" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.066349 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67ljk\" (UniqueName: \"kubernetes.io/projected/3dfb180a-094f-4716-b7b8-a50cb4841cc9-kube-api-access-67ljk\") pod \"nova-api-db-create-kpczv\" (UID: \"3dfb180a-094f-4716-b7b8-a50cb4841cc9\") " pod="openstack/nova-api-db-create-kpczv" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.071244 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tc999"] Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.103893 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aea467f-47c2-470e-a5db-2c3146a21658-operator-scripts\") pod \"nova-cell0-db-create-tc999\" (UID: \"5aea467f-47c2-470e-a5db-2c3146a21658\") " pod="openstack/nova-cell0-db-create-tc999" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.104161 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc86j\" (UniqueName: \"kubernetes.io/projected/5aea467f-47c2-470e-a5db-2c3146a21658-kube-api-access-zc86j\") pod \"nova-cell0-db-create-tc999\" (UID: \"5aea467f-47c2-470e-a5db-2c3146a21658\") " pod="openstack/nova-cell0-db-create-tc999" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.104227 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpn46\" (UniqueName: \"kubernetes.io/projected/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-kube-api-access-zpn46\") pod \"nova-api-3102-account-create-update-qk57q\" (UID: \"09d3635d-5cd3-4cbb-a214-5777c6b95a3e\") " pod="openstack/nova-api-3102-account-create-update-qk57q" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.104310 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-operator-scripts\") pod \"nova-api-3102-account-create-update-qk57q\" (UID: \"09d3635d-5cd3-4cbb-a214-5777c6b95a3e\") " pod="openstack/nova-api-3102-account-create-update-qk57q" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.181463 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-q6s4v"] Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.183221 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q6s4v" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.206005 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aea467f-47c2-470e-a5db-2c3146a21658-operator-scripts\") pod \"nova-cell0-db-create-tc999\" (UID: \"5aea467f-47c2-470e-a5db-2c3146a21658\") " pod="openstack/nova-cell0-db-create-tc999" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.206101 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/409ceda0-e9b0-4d91-8a2b-827d68b00483-operator-scripts\") pod \"nova-cell1-db-create-q6s4v\" (UID: \"409ceda0-e9b0-4d91-8a2b-827d68b00483\") " pod="openstack/nova-cell1-db-create-q6s4v" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.206208 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7hr2\" (UniqueName: \"kubernetes.io/projected/409ceda0-e9b0-4d91-8a2b-827d68b00483-kube-api-access-l7hr2\") pod \"nova-cell1-db-create-q6s4v\" (UID: \"409ceda0-e9b0-4d91-8a2b-827d68b00483\") " pod="openstack/nova-cell1-db-create-q6s4v" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.206274 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zc86j\" (UniqueName: \"kubernetes.io/projected/5aea467f-47c2-470e-a5db-2c3146a21658-kube-api-access-zc86j\") pod \"nova-cell0-db-create-tc999\" (UID: \"5aea467f-47c2-470e-a5db-2c3146a21658\") " pod="openstack/nova-cell0-db-create-tc999" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.206311 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpn46\" (UniqueName: \"kubernetes.io/projected/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-kube-api-access-zpn46\") pod \"nova-api-3102-account-create-update-qk57q\" (UID: \"09d3635d-5cd3-4cbb-a214-5777c6b95a3e\") " pod="openstack/nova-api-3102-account-create-update-qk57q" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.206342 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-operator-scripts\") pod \"nova-api-3102-account-create-update-qk57q\" (UID: \"09d3635d-5cd3-4cbb-a214-5777c6b95a3e\") " pod="openstack/nova-api-3102-account-create-update-qk57q" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.207178 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-operator-scripts\") pod \"nova-api-3102-account-create-update-qk57q\" (UID: \"09d3635d-5cd3-4cbb-a214-5777c6b95a3e\") " pod="openstack/nova-api-3102-account-create-update-qk57q" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.207387 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aea467f-47c2-470e-a5db-2c3146a21658-operator-scripts\") pod \"nova-cell0-db-create-tc999\" (UID: \"5aea467f-47c2-470e-a5db-2c3146a21658\") " pod="openstack/nova-cell0-db-create-tc999" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 
10:28:28.226695 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q6s4v"] Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.235511 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kpczv" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.239172 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc86j\" (UniqueName: \"kubernetes.io/projected/5aea467f-47c2-470e-a5db-2c3146a21658-kube-api-access-zc86j\") pod \"nova-cell0-db-create-tc999\" (UID: \"5aea467f-47c2-470e-a5db-2c3146a21658\") " pod="openstack/nova-cell0-db-create-tc999" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.247476 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2552-account-create-update-7qdk6"] Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.249047 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2552-account-create-update-7qdk6" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.259395 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpn46\" (UniqueName: \"kubernetes.io/projected/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-kube-api-access-zpn46\") pod \"nova-api-3102-account-create-update-qk57q\" (UID: \"09d3635d-5cd3-4cbb-a214-5777c6b95a3e\") " pod="openstack/nova-api-3102-account-create-update-qk57q" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.259772 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.303791 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2552-account-create-update-7qdk6"] Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.307843 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9be9489-aa33-45fd-881f-7b8da8a5022c-operator-scripts\") pod \"nova-cell0-2552-account-create-update-7qdk6\" (UID: \"a9be9489-aa33-45fd-881f-7b8da8a5022c\") " pod="openstack/nova-cell0-2552-account-create-update-7qdk6" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.307939 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcr2k\" (UniqueName: \"kubernetes.io/projected/a9be9489-aa33-45fd-881f-7b8da8a5022c-kube-api-access-pcr2k\") pod \"nova-cell0-2552-account-create-update-7qdk6\" (UID: \"a9be9489-aa33-45fd-881f-7b8da8a5022c\") " pod="openstack/nova-cell0-2552-account-create-update-7qdk6" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.307969 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/409ceda0-e9b0-4d91-8a2b-827d68b00483-operator-scripts\") pod \"nova-cell1-db-create-q6s4v\" (UID: \"409ceda0-e9b0-4d91-8a2b-827d68b00483\") " pod="openstack/nova-cell1-db-create-q6s4v" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.308037 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7hr2\" (UniqueName: \"kubernetes.io/projected/409ceda0-e9b0-4d91-8a2b-827d68b00483-kube-api-access-l7hr2\") pod \"nova-cell1-db-create-q6s4v\" (UID: \"409ceda0-e9b0-4d91-8a2b-827d68b00483\") " pod="openstack/nova-cell1-db-create-q6s4v" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.309050 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/409ceda0-e9b0-4d91-8a2b-827d68b00483-operator-scripts\") pod \"nova-cell1-db-create-q6s4v\" (UID: \"409ceda0-e9b0-4d91-8a2b-827d68b00483\") " pod="openstack/nova-cell1-db-create-q6s4v" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.328542 4811 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-3102-account-create-update-qk57q" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.333084 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7hr2\" (UniqueName: \"kubernetes.io/projected/409ceda0-e9b0-4d91-8a2b-827d68b00483-kube-api-access-l7hr2\") pod \"nova-cell1-db-create-q6s4v\" (UID: \"409ceda0-e9b0-4d91-8a2b-827d68b00483\") " pod="openstack/nova-cell1-db-create-q6s4v" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.362529 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1d9e-account-create-update-rb5mq"] Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.363775 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.368900 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.389607 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1d9e-account-create-update-rb5mq"] Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.412855 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcr2k\" (UniqueName: \"kubernetes.io/projected/a9be9489-aa33-45fd-881f-7b8da8a5022c-kube-api-access-pcr2k\") pod \"nova-cell0-2552-account-create-update-7qdk6\" (UID: \"a9be9489-aa33-45fd-881f-7b8da8a5022c\") " pod="openstack/nova-cell0-2552-account-create-update-7qdk6" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.413113 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78de4cdd-5be9-44ae-b1b9-783bf1279fae-operator-scripts\") pod \"nova-cell1-1d9e-account-create-update-rb5mq\" (UID: 
\"78de4cdd-5be9-44ae-b1b9-783bf1279fae\") " pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.413198 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9be9489-aa33-45fd-881f-7b8da8a5022c-operator-scripts\") pod \"nova-cell0-2552-account-create-update-7qdk6\" (UID: \"a9be9489-aa33-45fd-881f-7b8da8a5022c\") " pod="openstack/nova-cell0-2552-account-create-update-7qdk6" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.413481 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz2wc\" (UniqueName: \"kubernetes.io/projected/78de4cdd-5be9-44ae-b1b9-783bf1279fae-kube-api-access-pz2wc\") pod \"nova-cell1-1d9e-account-create-update-rb5mq\" (UID: \"78de4cdd-5be9-44ae-b1b9-783bf1279fae\") " pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.414469 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9be9489-aa33-45fd-881f-7b8da8a5022c-operator-scripts\") pod \"nova-cell0-2552-account-create-update-7qdk6\" (UID: \"a9be9489-aa33-45fd-881f-7b8da8a5022c\") " pod="openstack/nova-cell0-2552-account-create-update-7qdk6" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.433898 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcr2k\" (UniqueName: \"kubernetes.io/projected/a9be9489-aa33-45fd-881f-7b8da8a5022c-kube-api-access-pcr2k\") pod \"nova-cell0-2552-account-create-update-7qdk6\" (UID: \"a9be9489-aa33-45fd-881f-7b8da8a5022c\") " pod="openstack/nova-cell0-2552-account-create-update-7qdk6" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.451833 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tc999" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.516991 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78de4cdd-5be9-44ae-b1b9-783bf1279fae-operator-scripts\") pod \"nova-cell1-1d9e-account-create-update-rb5mq\" (UID: \"78de4cdd-5be9-44ae-b1b9-783bf1279fae\") " pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.517212 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz2wc\" (UniqueName: \"kubernetes.io/projected/78de4cdd-5be9-44ae-b1b9-783bf1279fae-kube-api-access-pz2wc\") pod \"nova-cell1-1d9e-account-create-update-rb5mq\" (UID: \"78de4cdd-5be9-44ae-b1b9-783bf1279fae\") " pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.519029 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78de4cdd-5be9-44ae-b1b9-783bf1279fae-operator-scripts\") pod \"nova-cell1-1d9e-account-create-update-rb5mq\" (UID: \"78de4cdd-5be9-44ae-b1b9-783bf1279fae\") " pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.528611 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q6s4v" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.535378 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz2wc\" (UniqueName: \"kubernetes.io/projected/78de4cdd-5be9-44ae-b1b9-783bf1279fae-kube-api-access-pz2wc\") pod \"nova-cell1-1d9e-account-create-update-rb5mq\" (UID: \"78de4cdd-5be9-44ae-b1b9-783bf1279fae\") " pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.677612 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2552-account-create-update-7qdk6" Feb 19 10:28:28 crc kubenswrapper[4811]: I0219 10:28:28.704094 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" Feb 19 10:28:29 crc kubenswrapper[4811]: I0219 10:28:29.681138 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:29 crc kubenswrapper[4811]: I0219 10:28:29.689027 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="ceilometer-central-agent" containerID="cri-o://7d68b6f2f543e678bc73abea5afe62d8dc40cd4028d176580ac1b1c6ed41f325" gracePeriod=30 Feb 19 10:28:29 crc kubenswrapper[4811]: I0219 10:28:29.689203 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="proxy-httpd" containerID="cri-o://8ab88e5f9e00222e95e08ed019c3e8f7ed106fbe8565e66d98104ef4839f0e11" gracePeriod=30 Feb 19 10:28:29 crc kubenswrapper[4811]: I0219 10:28:29.689323 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="sg-core" 
containerID="cri-o://8433db333c723c8a14bcbd03ad4d846e7838bc0b6c830f98613eea3b083f3593" gracePeriod=30 Feb 19 10:28:29 crc kubenswrapper[4811]: I0219 10:28:29.689334 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="ceilometer-notification-agent" containerID="cri-o://9eb04e82a89034fe952c9299ef9a8bad76f092f8d85bf1158dcce295b10a9d7a" gracePeriod=30 Feb 19 10:28:29 crc kubenswrapper[4811]: I0219 10:28:29.703918 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 10:28:30 crc kubenswrapper[4811]: I0219 10:28:30.223256 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerID="8ab88e5f9e00222e95e08ed019c3e8f7ed106fbe8565e66d98104ef4839f0e11" exitCode=0 Feb 19 10:28:30 crc kubenswrapper[4811]: I0219 10:28:30.223683 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerID="8433db333c723c8a14bcbd03ad4d846e7838bc0b6c830f98613eea3b083f3593" exitCode=2 Feb 19 10:28:30 crc kubenswrapper[4811]: I0219 10:28:30.223692 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerID="7d68b6f2f543e678bc73abea5afe62d8dc40cd4028d176580ac1b1c6ed41f325" exitCode=0 Feb 19 10:28:30 crc kubenswrapper[4811]: I0219 10:28:30.223718 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba","Type":"ContainerDied","Data":"8ab88e5f9e00222e95e08ed019c3e8f7ed106fbe8565e66d98104ef4839f0e11"} Feb 19 10:28:30 crc kubenswrapper[4811]: I0219 10:28:30.223756 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba","Type":"ContainerDied","Data":"8433db333c723c8a14bcbd03ad4d846e7838bc0b6c830f98613eea3b083f3593"} Feb 19 
10:28:30 crc kubenswrapper[4811]: I0219 10:28:30.223767 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba","Type":"ContainerDied","Data":"7d68b6f2f543e678bc73abea5afe62d8dc40cd4028d176580ac1b1c6ed41f325"} Feb 19 10:28:32 crc kubenswrapper[4811]: I0219 10:28:32.106524 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 10:28:32 crc kubenswrapper[4811]: I0219 10:28:32.254274 4811 generic.go:334] "Generic (PLEG): container finished" podID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerID="9eb04e82a89034fe952c9299ef9a8bad76f092f8d85bf1158dcce295b10a9d7a" exitCode=0 Feb 19 10:28:32 crc kubenswrapper[4811]: I0219 10:28:32.254328 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba","Type":"ContainerDied","Data":"9eb04e82a89034fe952c9299ef9a8bad76f092f8d85bf1158dcce295b10a9d7a"} Feb 19 10:28:32 crc kubenswrapper[4811]: I0219 10:28:32.714501 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-599475cf85-sh7k6" Feb 19 10:28:32 crc kubenswrapper[4811]: I0219 10:28:32.783836 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-69cb98ff68-7rhp9"] Feb 19 10:28:32 crc kubenswrapper[4811]: I0219 10:28:32.784111 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-69cb98ff68-7rhp9" podUID="fb83c4af-999b-46bc-89b8-f0909ccb3f14" containerName="neutron-api" containerID="cri-o://0502e3b3a8d1675000ba21adcaca40bffc785f9c9f085592a799ff25edc21f9f" gracePeriod=30 Feb 19 10:28:32 crc kubenswrapper[4811]: I0219 10:28:32.784585 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-69cb98ff68-7rhp9" podUID="fb83c4af-999b-46bc-89b8-f0909ccb3f14" containerName="neutron-httpd" 
containerID="cri-o://e4551f7eded09ebc9e05e93272a8f42cb6e80ec43d0de70c28e4b442b192d33e" gracePeriod=30 Feb 19 10:28:33 crc kubenswrapper[4811]: I0219 10:28:33.276566 4811 generic.go:334] "Generic (PLEG): container finished" podID="fb83c4af-999b-46bc-89b8-f0909ccb3f14" containerID="e4551f7eded09ebc9e05e93272a8f42cb6e80ec43d0de70c28e4b442b192d33e" exitCode=0 Feb 19 10:28:33 crc kubenswrapper[4811]: I0219 10:28:33.276635 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cb98ff68-7rhp9" event={"ID":"fb83c4af-999b-46bc-89b8-f0909ccb3f14","Type":"ContainerDied","Data":"e4551f7eded09ebc9e05e93272a8f42cb6e80ec43d0de70c28e4b442b192d33e"} Feb 19 10:28:34 crc kubenswrapper[4811]: I0219 10:28:34.429489 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:28:34 crc kubenswrapper[4811]: I0219 10:28:34.430065 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="dc7e3241-6036-4ef1-9bba-1621a28623a7" containerName="kube-state-metrics" containerID="cri-o://bb0dd2f210291ef068a155356f790adfeac0762f139e1e84d4bbec15669edc44" gracePeriod=30 Feb 19 10:28:35 crc kubenswrapper[4811]: I0219 10:28:35.309384 4811 generic.go:334] "Generic (PLEG): container finished" podID="dc7e3241-6036-4ef1-9bba-1621a28623a7" containerID="bb0dd2f210291ef068a155356f790adfeac0762f139e1e84d4bbec15669edc44" exitCode=2 Feb 19 10:28:35 crc kubenswrapper[4811]: I0219 10:28:35.309438 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc7e3241-6036-4ef1-9bba-1621a28623a7","Type":"ContainerDied","Data":"bb0dd2f210291ef068a155356f790adfeac0762f139e1e84d4bbec15669edc44"} Feb 19 10:28:35 crc kubenswrapper[4811]: I0219 10:28:35.759675 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="dc7e3241-6036-4ef1-9bba-1621a28623a7" containerName="kube-state-metrics" probeResult="failure" 
output="Get \"http://10.217.0.103:8081/readyz\": dial tcp 10.217.0.103:8081: connect: connection refused" Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.340430 4811 generic.go:334] "Generic (PLEG): container finished" podID="19d17501-8d5f-4f9d-8fad-94f48224e910" containerID="3156af09a62658676d66d9a902df8c7b995da32cc4c4fb049d66a9d5276fa9a4" exitCode=137 Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.340572 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676dcb6496-tnhmt" event={"ID":"19d17501-8d5f-4f9d-8fad-94f48224e910","Type":"ContainerDied","Data":"3156af09a62658676d66d9a902df8c7b995da32cc4c4fb049d66a9d5276fa9a4"} Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.343906 4811 generic.go:334] "Generic (PLEG): container finished" podID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerID="e88cbd6cffe02dfbcfdeed3f6b27917bd198051bdb2fc9298ecc4f011b2d45af" exitCode=137 Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.343985 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-766dfcbd56-lqbrt" event={"ID":"4ab45e9b-6cfc-48a3-b603-f47d47bbd510","Type":"ContainerDied","Data":"e88cbd6cffe02dfbcfdeed3f6b27917bd198051bdb2fc9298ecc4f011b2d45af"} Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.584896 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.725944 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hghq\" (UniqueName: \"kubernetes.io/projected/dc7e3241-6036-4ef1-9bba-1621a28623a7-kube-api-access-9hghq\") pod \"dc7e3241-6036-4ef1-9bba-1621a28623a7\" (UID: \"dc7e3241-6036-4ef1-9bba-1621a28623a7\") " Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.733895 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7e3241-6036-4ef1-9bba-1621a28623a7-kube-api-access-9hghq" (OuterVolumeSpecName: "kube-api-access-9hghq") pod "dc7e3241-6036-4ef1-9bba-1621a28623a7" (UID: "dc7e3241-6036-4ef1-9bba-1621a28623a7"). InnerVolumeSpecName "kube-api-access-9hghq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.774229 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.831906 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hghq\" (UniqueName: \"kubernetes.io/projected/dc7e3241-6036-4ef1-9bba-1621a28623a7-kube-api-access-9hghq\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.933216 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-log-httpd\") pod \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.933289 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-sg-core-conf-yaml\") pod \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.933352 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-combined-ca-bundle\") pod \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.933429 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-scripts\") pod \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.933569 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bt5h\" (UniqueName: 
\"kubernetes.io/projected/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-kube-api-access-6bt5h\") pod \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.933624 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-run-httpd\") pod \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.933656 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-config-data\") pod \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\" (UID: \"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba\") " Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.949801 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" (UID: "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.951657 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" (UID: "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.961675 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-scripts" (OuterVolumeSpecName: "scripts") pod "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" (UID: "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:36 crc kubenswrapper[4811]: I0219 10:28:36.975688 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-kube-api-access-6bt5h" (OuterVolumeSpecName: "kube-api-access-6bt5h") pod "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" (UID: "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba"). InnerVolumeSpecName "kube-api-access-6bt5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.047486 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.047522 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bt5h\" (UniqueName: \"kubernetes.io/projected/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-kube-api-access-6bt5h\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.047536 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.047547 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:37 
crc kubenswrapper[4811]: I0219 10:28:37.063641 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" (UID: "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.150047 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.163188 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-config-data" (OuterVolumeSpecName: "config-data") pod "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" (UID: "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.231748 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" (UID: "6c301fbe-f93a-4b37-bc7b-3f226b08a6ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.252318 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.252363 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.263198 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kpczv"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.361013 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c301fbe-f93a-4b37-bc7b-3f226b08a6ba","Type":"ContainerDied","Data":"e1555bf3831f6e08de2830d89cfa4962ed2b5255111a6afc75da4848fea5d9b2"} Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.361069 4811 scope.go:117] "RemoveContainer" containerID="8ab88e5f9e00222e95e08ed019c3e8f7ed106fbe8565e66d98104ef4839f0e11" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.361219 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.368303 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dc7e3241-6036-4ef1-9bba-1621a28623a7","Type":"ContainerDied","Data":"8d47a8fbbe8206afbb102d09dd2718f57616dcef69806608092736d21788f948"} Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.368501 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.377444 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kpczv" event={"ID":"3dfb180a-094f-4716-b7b8-a50cb4841cc9","Type":"ContainerStarted","Data":"b7362f8b33a43ad4ffd23f6f0a911033108642f91e97e576026e22454b6a91b7"} Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.382759 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-766dfcbd56-lqbrt" event={"ID":"4ab45e9b-6cfc-48a3-b603-f47d47bbd510","Type":"ContainerStarted","Data":"bfc48b984c2ea255b68016b29a536c04abbfce299b7e71e4dc30275cf1b30bdd"} Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.385688 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-666bdb77b9-x6mrc"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.429819 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c6494cd0-c929-4278-974b-0839193092f1","Type":"ContainerStarted","Data":"3d3c965ac54da7977c6f993e8687ef3c54ea08f7eb0bff4f71f2d3d621e13a77"} Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.441070 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-676dcb6496-tnhmt" event={"ID":"19d17501-8d5f-4f9d-8fad-94f48224e910","Type":"ContainerStarted","Data":"5707e09818907daa553ecb2bbef370f05a24e6fbf34159dbdb8fbc4c94a1d731"} Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.471365 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.480387 4811 scope.go:117] "RemoveContainer" containerID="8433db333c723c8a14bcbd03ad4d846e7838bc0b6c830f98613eea3b083f3593" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.510662 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.521968 
4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:37 crc kubenswrapper[4811]: E0219 10:28:37.522544 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="proxy-httpd" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.522564 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="proxy-httpd" Feb 19 10:28:37 crc kubenswrapper[4811]: E0219 10:28:37.522582 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="ceilometer-notification-agent" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.522590 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="ceilometer-notification-agent" Feb 19 10:28:37 crc kubenswrapper[4811]: E0219 10:28:37.522619 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="sg-core" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.522626 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="sg-core" Feb 19 10:28:37 crc kubenswrapper[4811]: E0219 10:28:37.522636 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="ceilometer-central-agent" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.522643 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="ceilometer-central-agent" Feb 19 10:28:37 crc kubenswrapper[4811]: E0219 10:28:37.522652 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7e3241-6036-4ef1-9bba-1621a28623a7" containerName="kube-state-metrics" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.522660 4811 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="dc7e3241-6036-4ef1-9bba-1621a28623a7" containerName="kube-state-metrics" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.522872 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="proxy-httpd" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.522889 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7e3241-6036-4ef1-9bba-1621a28623a7" containerName="kube-state-metrics" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.522917 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="ceilometer-central-agent" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.522924 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="sg-core" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.522935 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="ceilometer-notification-agent" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.525458 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.531295 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.531573 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-m722p" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.535910 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.536225 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.556747 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.653948 4811 scope.go:117] "RemoveContainer" containerID="9eb04e82a89034fe952c9299ef9a8bad76f092f8d85bf1158dcce295b10a9d7a" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.671982 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-scripts\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.672032 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.672062 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kcww\" 
(UniqueName: \"kubernetes.io/projected/9b8399ed-d9be-43ed-b390-1848772886c4-kube-api-access-6kcww\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.672138 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-run-httpd\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.672162 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.672185 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-config-data\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.672239 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.672271 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-log-httpd\") pod \"ceilometer-0\" (UID: 
\"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.741540 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.372764979 podStartE2EDuration="18.74148509s" podCreationTimestamp="2026-02-19 10:28:19 +0000 UTC" firstStartedPulling="2026-02-19 10:28:20.905881698 +0000 UTC m=+1189.432346384" lastFinishedPulling="2026-02-19 10:28:36.274601809 +0000 UTC m=+1204.801066495" observedRunningTime="2026-02-19 10:28:37.516989515 +0000 UTC m=+1206.043454211" watchObservedRunningTime="2026-02-19 10:28:37.74148509 +0000 UTC m=+1206.267949776" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.741873 4811 scope.go:117] "RemoveContainer" containerID="7d68b6f2f543e678bc73abea5afe62d8dc40cd4028d176580ac1b1c6ed41f325" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.774590 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-run-httpd\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.774835 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.775889 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-config-data\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.776036 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.776146 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-log-httpd\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.776319 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-scripts\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.775829 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-run-httpd\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.778831 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.778907 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-log-httpd\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " 
pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.779885 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kcww\" (UniqueName: \"kubernetes.io/projected/9b8399ed-d9be-43ed-b390-1848772886c4-kube-api-access-6kcww\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.781725 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.784904 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.788516 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: W0219 10:28:37.790562 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9be9489_aa33_45fd_881f_7b8da8a5022c.slice/crio-570e58af575f6a79287c077b4cb8e8e4efec2f48ce78eb7b89a2258ef8080d50 WatchSource:0}: Error finding container 570e58af575f6a79287c077b4cb8e8e4efec2f48ce78eb7b89a2258ef8080d50: Status 404 returned error can't find the container with id 
570e58af575f6a79287c077b4cb8e8e4efec2f48ce78eb7b89a2258ef8080d50 Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.790801 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-scripts\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.795896 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.804558 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kcww\" (UniqueName: \"kubernetes.io/projected/9b8399ed-d9be-43ed-b390-1848772886c4-kube-api-access-6kcww\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.806282 4811 scope.go:117] "RemoveContainer" containerID="bb0dd2f210291ef068a155356f790adfeac0762f139e1e84d4bbec15669edc44" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.786904 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-config-data\") pod \"ceilometer-0\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.812524 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.822832 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.823057 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.847760 4811 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.891908 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.893288 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.896288 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.898222 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.905569 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.920604 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1d9e-account-create-update-rb5mq"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.930931 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q6s4v"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.935195 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.949938 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3102-account-create-update-qk57q"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.955173 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tc999"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.964901 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2552-account-create-update-7qdk6"] Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.992772 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.992838 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bdgr\" (UniqueName: \"kubernetes.io/projected/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-kube-api-access-5bdgr\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.992919 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:37 crc kubenswrapper[4811]: I0219 10:28:37.993053 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.094563 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.094618 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bdgr\" (UniqueName: \"kubernetes.io/projected/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-kube-api-access-5bdgr\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.094704 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.094818 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.102519 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.128600 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.135239 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.144327 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bdgr\" (UniqueName: \"kubernetes.io/projected/5f41cd68-0e1f-48df-a830-1c581d0cfe9e-kube-api-access-5bdgr\") pod \"kube-state-metrics-0\" (UID: \"5f41cd68-0e1f-48df-a830-1c581d0cfe9e\") " pod="openstack/kube-state-metrics-0" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.229478 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.501088 4811 generic.go:334] "Generic (PLEG): container finished" podID="3dfb180a-094f-4716-b7b8-a50cb4841cc9" containerID="5da12c13e5991e34463f734fc0ffe4ed5830c82ae0c3807e259991334539fb85" exitCode=0 Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.501262 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kpczv" event={"ID":"3dfb180a-094f-4716-b7b8-a50cb4841cc9","Type":"ContainerDied","Data":"5da12c13e5991e34463f734fc0ffe4ed5830c82ae0c3807e259991334539fb85"} Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.537069 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-666bdb77b9-x6mrc" event={"ID":"0beeae9e-3b17-4691-8273-95fbc4be85fb","Type":"ContainerStarted","Data":"87ff75ca52b41a0eb5f49693d5ee795ecb6c12ef719347e80d44fe875f9502cb"} Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.537140 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-666bdb77b9-x6mrc" event={"ID":"0beeae9e-3b17-4691-8273-95fbc4be85fb","Type":"ContainerStarted","Data":"ab0cd3ded22f83245031dddd1cac8cc873b403daaf656175743390b6e3ee2185"} Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.551639 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3102-account-create-update-qk57q" event={"ID":"09d3635d-5cd3-4cbb-a214-5777c6b95a3e","Type":"ContainerStarted","Data":"e874dfa4da468f880dbca1f21938f82c133a761014acb6749d4982d60bb9eda0"} Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.573513 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-3102-account-create-update-qk57q" podStartSLOduration=11.573496944 podStartE2EDuration="11.573496944s" podCreationTimestamp="2026-02-19 10:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-19 10:28:38.572695483 +0000 UTC m=+1207.099160169" watchObservedRunningTime="2026-02-19 10:28:38.573496944 +0000 UTC m=+1207.099961630" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.575334 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" event={"ID":"78de4cdd-5be9-44ae-b1b9-783bf1279fae","Type":"ContainerStarted","Data":"222c34a9f281cbf38cf5561dee182cde3355836c4ccd5fa30c0421993800ac08"} Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.593762 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.593837 4811 generic.go:334] "Generic (PLEG): container finished" podID="fb83c4af-999b-46bc-89b8-f0909ccb3f14" containerID="0502e3b3a8d1675000ba21adcaca40bffc785f9c9f085592a799ff25edc21f9f" exitCode=0 Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.604765 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" podStartSLOduration=10.604747302 podStartE2EDuration="10.604747302s" podCreationTimestamp="2026-02-19 10:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:38.594016778 +0000 UTC m=+1207.120481464" watchObservedRunningTime="2026-02-19 10:28:38.604747302 +0000 UTC m=+1207.131211978" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.606219 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" path="/var/lib/kubelet/pods/6c301fbe-f93a-4b37-bc7b-3f226b08a6ba/volumes" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.608672 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7e3241-6036-4ef1-9bba-1621a28623a7" 
path="/var/lib/kubelet/pods/dc7e3241-6036-4ef1-9bba-1621a28623a7/volumes" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.610032 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cb98ff68-7rhp9" event={"ID":"fb83c4af-999b-46bc-89b8-f0909ccb3f14","Type":"ContainerDied","Data":"0502e3b3a8d1675000ba21adcaca40bffc785f9c9f085592a799ff25edc21f9f"} Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.610068 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q6s4v" event={"ID":"409ceda0-e9b0-4d91-8a2b-827d68b00483","Type":"ContainerStarted","Data":"e9c5263eabaa87c5628db3c0d4837bd44690baa07ddb3cad4244fca1d22d3c05"} Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.610111 4811 scope.go:117] "RemoveContainer" containerID="e4551f7eded09ebc9e05e93272a8f42cb6e80ec43d0de70c28e4b442b192d33e" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.619933 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tc999" event={"ID":"5aea467f-47c2-470e-a5db-2c3146a21658","Type":"ContainerStarted","Data":"208a50f9652781f7bd420ec2459c5adb012527ba942a68a550c43e8c03e7c44c"} Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.665379 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2552-account-create-update-7qdk6" event={"ID":"a9be9489-aa33-45fd-881f-7b8da8a5022c","Type":"ContainerStarted","Data":"570e58af575f6a79287c077b4cb8e8e4efec2f48ce78eb7b89a2258ef8080d50"} Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.666345 4811 scope.go:117] "RemoveContainer" containerID="0502e3b3a8d1675000ba21adcaca40bffc785f9c9f085592a799ff25edc21f9f" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.668202 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-q6s4v" podStartSLOduration=10.668185563 podStartE2EDuration="10.668185563s" podCreationTimestamp="2026-02-19 10:28:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:38.653893798 +0000 UTC m=+1207.180358484" watchObservedRunningTime="2026-02-19 10:28:38.668185563 +0000 UTC m=+1207.194650249" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.681851 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-tc999" podStartSLOduration=11.681829031 podStartE2EDuration="11.681829031s" podCreationTimestamp="2026-02-19 10:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:38.66962861 +0000 UTC m=+1207.196093296" watchObservedRunningTime="2026-02-19 10:28:38.681829031 +0000 UTC m=+1207.208293707" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.701936 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-2552-account-create-update-7qdk6" podStartSLOduration=10.701918124 podStartE2EDuration="10.701918124s" podCreationTimestamp="2026-02-19 10:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:38.693437128 +0000 UTC m=+1207.219901814" watchObservedRunningTime="2026-02-19 10:28:38.701918124 +0000 UTC m=+1207.228382810" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.711669 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-combined-ca-bundle\") pod \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.711713 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-ovndb-tls-certs\") pod \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.711738 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-config\") pod \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.711786 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-httpd-config\") pod \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.711833 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzg25\" (UniqueName: \"kubernetes.io/projected/fb83c4af-999b-46bc-89b8-f0909ccb3f14-kube-api-access-gzg25\") pod \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\" (UID: \"fb83c4af-999b-46bc-89b8-f0909ccb3f14\") " Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.721198 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fb83c4af-999b-46bc-89b8-f0909ccb3f14" (UID: "fb83c4af-999b-46bc-89b8-f0909ccb3f14"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.738151 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb83c4af-999b-46bc-89b8-f0909ccb3f14-kube-api-access-gzg25" (OuterVolumeSpecName: "kube-api-access-gzg25") pod "fb83c4af-999b-46bc-89b8-f0909ccb3f14" (UID: "fb83c4af-999b-46bc-89b8-f0909ccb3f14"). InnerVolumeSpecName "kube-api-access-gzg25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.799568 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb83c4af-999b-46bc-89b8-f0909ccb3f14" (UID: "fb83c4af-999b-46bc-89b8-f0909ccb3f14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.822085 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.822119 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.822129 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzg25\" (UniqueName: \"kubernetes.io/projected/fb83c4af-999b-46bc-89b8-f0909ccb3f14-kube-api-access-gzg25\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.835565 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-config" 
(OuterVolumeSpecName: "config") pod "fb83c4af-999b-46bc-89b8-f0909ccb3f14" (UID: "fb83c4af-999b-46bc-89b8-f0909ccb3f14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.881256 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fb83c4af-999b-46bc-89b8-f0909ccb3f14" (UID: "fb83c4af-999b-46bc-89b8-f0909ccb3f14"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.893781 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.927136 4811 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:38 crc kubenswrapper[4811]: I0219 10:28:38.927188 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb83c4af-999b-46bc-89b8-f0909ccb3f14-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.037172 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.717035 4811 generic.go:334] "Generic (PLEG): container finished" podID="5aea467f-47c2-470e-a5db-2c3146a21658" containerID="93af5527cd4fc76efa832e4860f1a0123cc276a291e24bacbeedde19f3726189" exitCode=0 Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.717426 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tc999" 
event={"ID":"5aea467f-47c2-470e-a5db-2c3146a21658","Type":"ContainerDied","Data":"93af5527cd4fc76efa832e4860f1a0123cc276a291e24bacbeedde19f3726189"} Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.740102 4811 generic.go:334] "Generic (PLEG): container finished" podID="09d3635d-5cd3-4cbb-a214-5777c6b95a3e" containerID="37fa49774aaccfdcd39825cdbed463868d77981cdaba6d6c402a16eecb98f0d6" exitCode=0 Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.740179 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3102-account-create-update-qk57q" event={"ID":"09d3635d-5cd3-4cbb-a214-5777c6b95a3e","Type":"ContainerDied","Data":"37fa49774aaccfdcd39825cdbed463868d77981cdaba6d6c402a16eecb98f0d6"} Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.769806 4811 generic.go:334] "Generic (PLEG): container finished" podID="78de4cdd-5be9-44ae-b1b9-783bf1279fae" containerID="ffe2e1c1ace860741ed77b73dd787f3888c036f7dfcd56a01f34b64e2677be14" exitCode=0 Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.769978 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" event={"ID":"78de4cdd-5be9-44ae-b1b9-783bf1279fae","Type":"ContainerDied","Data":"ffe2e1c1ace860741ed77b73dd787f3888c036f7dfcd56a01f34b64e2677be14"} Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.793475 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5f41cd68-0e1f-48df-a830-1c581d0cfe9e","Type":"ContainerStarted","Data":"609df136d513888a51051f464b1bca77b606a7a73f27b7aefd1ab33beeb25d5e"} Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.803961 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69cb98ff68-7rhp9" event={"ID":"fb83c4af-999b-46bc-89b8-f0909ccb3f14","Type":"ContainerDied","Data":"71f292254258bfc16cf264c93d0014d0e3e157465a849f4efa69bb43cef05f40"} Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.804093 4811 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-69cb98ff68-7rhp9" Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.817183 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-666bdb77b9-x6mrc" event={"ID":"0beeae9e-3b17-4691-8273-95fbc4be85fb","Type":"ContainerStarted","Data":"6135d54778bab88fb7ff1552a42bfb77c283b42d895652e09bb5949174af669f"} Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.817687 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.817757 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.852045 4811 generic.go:334] "Generic (PLEG): container finished" podID="409ceda0-e9b0-4d91-8a2b-827d68b00483" containerID="a9bd8448d14e1f048becd22b643686d9e8f43c2482a0e5120d351d5a1da45299" exitCode=0 Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.852264 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q6s4v" event={"ID":"409ceda0-e9b0-4d91-8a2b-827d68b00483","Type":"ContainerDied","Data":"a9bd8448d14e1f048becd22b643686d9e8f43c2482a0e5120d351d5a1da45299"} Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.856323 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8399ed-d9be-43ed-b390-1848772886c4","Type":"ContainerStarted","Data":"a5919cb3be109761f79cf1c00ba234685d2f8004eb044e423977a0c82925915d"} Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.865461 4811 generic.go:334] "Generic (PLEG): container finished" podID="a9be9489-aa33-45fd-881f-7b8da8a5022c" containerID="faa277203517eba149841671b72a8f55ca8e597f695e214ea0ea08f863819e97" exitCode=0 Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.865781 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2552-account-create-update-7qdk6" event={"ID":"a9be9489-aa33-45fd-881f-7b8da8a5022c","Type":"ContainerDied","Data":"faa277203517eba149841671b72a8f55ca8e597f695e214ea0ea08f863819e97"} Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.876617 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-666bdb77b9-x6mrc" podStartSLOduration=12.876592301 podStartE2EDuration="12.876592301s" podCreationTimestamp="2026-02-19 10:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:28:39.867859438 +0000 UTC m=+1208.394324124" watchObservedRunningTime="2026-02-19 10:28:39.876592301 +0000 UTC m=+1208.403056987" Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.966525 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-69cb98ff68-7rhp9"] Feb 19 10:28:39 crc kubenswrapper[4811]: I0219 10:28:39.983432 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-69cb98ff68-7rhp9"] Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.429925 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kpczv" Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.600838 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfb180a-094f-4716-b7b8-a50cb4841cc9-operator-scripts\") pod \"3dfb180a-094f-4716-b7b8-a50cb4841cc9\" (UID: \"3dfb180a-094f-4716-b7b8-a50cb4841cc9\") " Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.600936 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67ljk\" (UniqueName: \"kubernetes.io/projected/3dfb180a-094f-4716-b7b8-a50cb4841cc9-kube-api-access-67ljk\") pod \"3dfb180a-094f-4716-b7b8-a50cb4841cc9\" (UID: \"3dfb180a-094f-4716-b7b8-a50cb4841cc9\") " Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.602199 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dfb180a-094f-4716-b7b8-a50cb4841cc9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dfb180a-094f-4716-b7b8-a50cb4841cc9" (UID: "3dfb180a-094f-4716-b7b8-a50cb4841cc9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.606694 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb83c4af-999b-46bc-89b8-f0909ccb3f14" path="/var/lib/kubelet/pods/fb83c4af-999b-46bc-89b8-f0909ccb3f14/volumes" Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.614559 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dfb180a-094f-4716-b7b8-a50cb4841cc9-kube-api-access-67ljk" (OuterVolumeSpecName: "kube-api-access-67ljk") pod "3dfb180a-094f-4716-b7b8-a50cb4841cc9" (UID: "3dfb180a-094f-4716-b7b8-a50cb4841cc9"). InnerVolumeSpecName "kube-api-access-67ljk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.702959 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dfb180a-094f-4716-b7b8-a50cb4841cc9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.702997 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67ljk\" (UniqueName: \"kubernetes.io/projected/3dfb180a-094f-4716-b7b8-a50cb4841cc9-kube-api-access-67ljk\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.877809 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"5f41cd68-0e1f-48df-a830-1c581d0cfe9e","Type":"ContainerStarted","Data":"da465f7b312d09b49469496005b66dd1052d0a08b1cdedc7dd84463c95f529cb"} Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.877944 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.880233 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kpczv" event={"ID":"3dfb180a-094f-4716-b7b8-a50cb4841cc9","Type":"ContainerDied","Data":"b7362f8b33a43ad4ffd23f6f0a911033108642f91e97e576026e22454b6a91b7"} Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.880284 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7362f8b33a43ad4ffd23f6f0a911033108642f91e97e576026e22454b6a91b7" Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.880330 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kpczv" Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.887409 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8399ed-d9be-43ed-b390-1848772886c4","Type":"ContainerStarted","Data":"cbc61300c164e874e274e655761b10cad62b657817845c4096024dc305776509"} Feb 19 10:28:40 crc kubenswrapper[4811]: I0219 10:28:40.910123 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.427807711 podStartE2EDuration="3.910104822s" podCreationTimestamp="2026-02-19 10:28:37 +0000 UTC" firstStartedPulling="2026-02-19 10:28:39.041435726 +0000 UTC m=+1207.567900412" lastFinishedPulling="2026-02-19 10:28:39.523732837 +0000 UTC m=+1208.050197523" observedRunningTime="2026-02-19 10:28:40.902928678 +0000 UTC m=+1209.429393364" watchObservedRunningTime="2026-02-19 10:28:40.910104822 +0000 UTC m=+1209.436569498" Feb 19 10:28:41 crc kubenswrapper[4811]: I0219 10:28:41.376768 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" Feb 19 10:28:41 crc kubenswrapper[4811]: I0219 10:28:41.523278 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz2wc\" (UniqueName: \"kubernetes.io/projected/78de4cdd-5be9-44ae-b1b9-783bf1279fae-kube-api-access-pz2wc\") pod \"78de4cdd-5be9-44ae-b1b9-783bf1279fae\" (UID: \"78de4cdd-5be9-44ae-b1b9-783bf1279fae\") " Feb 19 10:28:41 crc kubenswrapper[4811]: I0219 10:28:41.523518 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78de4cdd-5be9-44ae-b1b9-783bf1279fae-operator-scripts\") pod \"78de4cdd-5be9-44ae-b1b9-783bf1279fae\" (UID: \"78de4cdd-5be9-44ae-b1b9-783bf1279fae\") " Feb 19 10:28:41 crc kubenswrapper[4811]: I0219 10:28:41.524424 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78de4cdd-5be9-44ae-b1b9-783bf1279fae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78de4cdd-5be9-44ae-b1b9-783bf1279fae" (UID: "78de4cdd-5be9-44ae-b1b9-783bf1279fae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:41 crc kubenswrapper[4811]: I0219 10:28:41.545365 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78de4cdd-5be9-44ae-b1b9-783bf1279fae-kube-api-access-pz2wc" (OuterVolumeSpecName: "kube-api-access-pz2wc") pod "78de4cdd-5be9-44ae-b1b9-783bf1279fae" (UID: "78de4cdd-5be9-44ae-b1b9-783bf1279fae"). InnerVolumeSpecName "kube-api-access-pz2wc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:41 crc kubenswrapper[4811]: I0219 10:28:41.626007 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78de4cdd-5be9-44ae-b1b9-783bf1279fae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:41 crc kubenswrapper[4811]: I0219 10:28:41.626042 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz2wc\" (UniqueName: \"kubernetes.io/projected/78de4cdd-5be9-44ae-b1b9-783bf1279fae-kube-api-access-pz2wc\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:41 crc kubenswrapper[4811]: I0219 10:28:41.900982 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" Feb 19 10:28:41 crc kubenswrapper[4811]: I0219 10:28:41.900967 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1d9e-account-create-update-rb5mq" event={"ID":"78de4cdd-5be9-44ae-b1b9-783bf1279fae","Type":"ContainerDied","Data":"222c34a9f281cbf38cf5561dee182cde3355836c4ccd5fa30c0421993800ac08"} Feb 19 10:28:41 crc kubenswrapper[4811]: I0219 10:28:41.901929 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="222c34a9f281cbf38cf5561dee182cde3355836c4ccd5fa30c0421993800ac08" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.430284 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3102-account-create-update-qk57q" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.436267 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2552-account-create-update-7qdk6" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.443629 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q6s4v" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.450795 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tc999" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.545269 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcr2k\" (UniqueName: \"kubernetes.io/projected/a9be9489-aa33-45fd-881f-7b8da8a5022c-kube-api-access-pcr2k\") pod \"a9be9489-aa33-45fd-881f-7b8da8a5022c\" (UID: \"a9be9489-aa33-45fd-881f-7b8da8a5022c\") " Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.545395 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpn46\" (UniqueName: \"kubernetes.io/projected/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-kube-api-access-zpn46\") pod \"09d3635d-5cd3-4cbb-a214-5777c6b95a3e\" (UID: \"09d3635d-5cd3-4cbb-a214-5777c6b95a3e\") " Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.545441 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-operator-scripts\") pod \"09d3635d-5cd3-4cbb-a214-5777c6b95a3e\" (UID: \"09d3635d-5cd3-4cbb-a214-5777c6b95a3e\") " Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.545503 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9be9489-aa33-45fd-881f-7b8da8a5022c-operator-scripts\") pod \"a9be9489-aa33-45fd-881f-7b8da8a5022c\" (UID: \"a9be9489-aa33-45fd-881f-7b8da8a5022c\") " Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.546778 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9be9489-aa33-45fd-881f-7b8da8a5022c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"a9be9489-aa33-45fd-881f-7b8da8a5022c" (UID: "a9be9489-aa33-45fd-881f-7b8da8a5022c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.547189 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09d3635d-5cd3-4cbb-a214-5777c6b95a3e" (UID: "09d3635d-5cd3-4cbb-a214-5777c6b95a3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.554675 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9be9489-aa33-45fd-881f-7b8da8a5022c-kube-api-access-pcr2k" (OuterVolumeSpecName: "kube-api-access-pcr2k") pod "a9be9489-aa33-45fd-881f-7b8da8a5022c" (UID: "a9be9489-aa33-45fd-881f-7b8da8a5022c"). InnerVolumeSpecName "kube-api-access-pcr2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.555124 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-kube-api-access-zpn46" (OuterVolumeSpecName: "kube-api-access-zpn46") pod "09d3635d-5cd3-4cbb-a214-5777c6b95a3e" (UID: "09d3635d-5cd3-4cbb-a214-5777c6b95a3e"). InnerVolumeSpecName "kube-api-access-zpn46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.646792 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/409ceda0-e9b0-4d91-8a2b-827d68b00483-operator-scripts\") pod \"409ceda0-e9b0-4d91-8a2b-827d68b00483\" (UID: \"409ceda0-e9b0-4d91-8a2b-827d68b00483\") " Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.647203 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aea467f-47c2-470e-a5db-2c3146a21658-operator-scripts\") pod \"5aea467f-47c2-470e-a5db-2c3146a21658\" (UID: \"5aea467f-47c2-470e-a5db-2c3146a21658\") " Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.647310 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409ceda0-e9b0-4d91-8a2b-827d68b00483-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "409ceda0-e9b0-4d91-8a2b-827d68b00483" (UID: "409ceda0-e9b0-4d91-8a2b-827d68b00483"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.647292 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7hr2\" (UniqueName: \"kubernetes.io/projected/409ceda0-e9b0-4d91-8a2b-827d68b00483-kube-api-access-l7hr2\") pod \"409ceda0-e9b0-4d91-8a2b-827d68b00483\" (UID: \"409ceda0-e9b0-4d91-8a2b-827d68b00483\") " Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.647548 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc86j\" (UniqueName: \"kubernetes.io/projected/5aea467f-47c2-470e-a5db-2c3146a21658-kube-api-access-zc86j\") pod \"5aea467f-47c2-470e-a5db-2c3146a21658\" (UID: \"5aea467f-47c2-470e-a5db-2c3146a21658\") " Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.647608 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aea467f-47c2-470e-a5db-2c3146a21658-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5aea467f-47c2-470e-a5db-2c3146a21658" (UID: "5aea467f-47c2-470e-a5db-2c3146a21658"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.648359 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcr2k\" (UniqueName: \"kubernetes.io/projected/a9be9489-aa33-45fd-881f-7b8da8a5022c-kube-api-access-pcr2k\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.648389 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aea467f-47c2-470e-a5db-2c3146a21658-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.648411 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpn46\" (UniqueName: \"kubernetes.io/projected/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-kube-api-access-zpn46\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.648509 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09d3635d-5cd3-4cbb-a214-5777c6b95a3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.648776 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9be9489-aa33-45fd-881f-7b8da8a5022c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.648792 4811 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/409ceda0-e9b0-4d91-8a2b-827d68b00483-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.657299 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aea467f-47c2-470e-a5db-2c3146a21658-kube-api-access-zc86j" (OuterVolumeSpecName: "kube-api-access-zc86j") pod 
"5aea467f-47c2-470e-a5db-2c3146a21658" (UID: "5aea467f-47c2-470e-a5db-2c3146a21658"). InnerVolumeSpecName "kube-api-access-zc86j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.663280 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409ceda0-e9b0-4d91-8a2b-827d68b00483-kube-api-access-l7hr2" (OuterVolumeSpecName: "kube-api-access-l7hr2") pod "409ceda0-e9b0-4d91-8a2b-827d68b00483" (UID: "409ceda0-e9b0-4d91-8a2b-827d68b00483"). InnerVolumeSpecName "kube-api-access-l7hr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.716757 4811 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod918b0763-de1c-49fa-8ade-2410fb2e1af3"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod918b0763-de1c-49fa-8ade-2410fb2e1af3] : Timed out while waiting for systemd to remove kubepods-besteffort-pod918b0763_de1c_49fa_8ade_2410fb2e1af3.slice" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.750337 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7hr2\" (UniqueName: \"kubernetes.io/projected/409ceda0-e9b0-4d91-8a2b-827d68b00483-kube-api-access-l7hr2\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.750373 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc86j\" (UniqueName: \"kubernetes.io/projected/5aea467f-47c2-470e-a5db-2c3146a21658-kube-api-access-zc86j\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.913395 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3102-account-create-update-qk57q" event={"ID":"09d3635d-5cd3-4cbb-a214-5777c6b95a3e","Type":"ContainerDied","Data":"e874dfa4da468f880dbca1f21938f82c133a761014acb6749d4982d60bb9eda0"} Feb 19 10:28:42 crc 
kubenswrapper[4811]: I0219 10:28:42.915429 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e874dfa4da468f880dbca1f21938f82c133a761014acb6749d4982d60bb9eda0" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.915556 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q6s4v" event={"ID":"409ceda0-e9b0-4d91-8a2b-827d68b00483","Type":"ContainerDied","Data":"e9c5263eabaa87c5628db3c0d4837bd44690baa07ddb3cad4244fca1d22d3c05"} Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.915643 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9c5263eabaa87c5628db3c0d4837bd44690baa07ddb3cad4244fca1d22d3c05" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.915231 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q6s4v" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.913411 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3102-account-create-update-qk57q" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.923353 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8399ed-d9be-43ed-b390-1848772886c4","Type":"ContainerStarted","Data":"ebacdea3e497af67cbff14f144c2a2137e722e99f432ecba4dbcf23b70e8423b"} Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.938861 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tc999" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.938865 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tc999" event={"ID":"5aea467f-47c2-470e-a5db-2c3146a21658","Type":"ContainerDied","Data":"208a50f9652781f7bd420ec2459c5adb012527ba942a68a550c43e8c03e7c44c"} Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.943570 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="208a50f9652781f7bd420ec2459c5adb012527ba942a68a550c43e8c03e7c44c" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.959233 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2552-account-create-update-7qdk6" event={"ID":"a9be9489-aa33-45fd-881f-7b8da8a5022c","Type":"ContainerDied","Data":"570e58af575f6a79287c077b4cb8e8e4efec2f48ce78eb7b89a2258ef8080d50"} Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.959282 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="570e58af575f6a79287c077b4cb8e8e4efec2f48ce78eb7b89a2258ef8080d50" Feb 19 10:28:42 crc kubenswrapper[4811]: I0219 10:28:42.959346 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2552-account-create-update-7qdk6" Feb 19 10:28:43 crc kubenswrapper[4811]: I0219 10:28:43.133761 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:43 crc kubenswrapper[4811]: I0219 10:28:43.153001 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-666bdb77b9-x6mrc" Feb 19 10:28:43 crc kubenswrapper[4811]: I0219 10:28:43.973563 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8399ed-d9be-43ed-b390-1848772886c4","Type":"ContainerStarted","Data":"9a894e26346c07705d4812963e8e5a722198c93fa5a9a993f6af35a1d92e8d9c"} Feb 19 10:28:44 crc kubenswrapper[4811]: I0219 10:28:44.478276 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:44 crc kubenswrapper[4811]: I0219 10:28:44.480748 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6df884f584-x4z2k" Feb 19 10:28:44 crc kubenswrapper[4811]: I0219 10:28:44.641155 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68f45d64cb-4stgq"] Feb 19 10:28:44 crc kubenswrapper[4811]: I0219 10:28:44.641914 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-68f45d64cb-4stgq" podUID="1a92ef88-05db-43fe-a0bb-1d188afbae59" containerName="placement-log" containerID="cri-o://e6fe3a3141d314270a76b0a597b39b2e42ebda56acc5631b73c9ce583e3d92d5" gracePeriod=30 Feb 19 10:28:44 crc kubenswrapper[4811]: I0219 10:28:44.642085 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-68f45d64cb-4stgq" podUID="1a92ef88-05db-43fe-a0bb-1d188afbae59" containerName="placement-api" containerID="cri-o://d8cb62fd80072f2cbc83755c9a1d2433b446b094759e27ffd2aef4f9f647738a" gracePeriod=30 Feb 19 
10:28:44 crc kubenswrapper[4811]: I0219 10:28:44.998841 4811 generic.go:334] "Generic (PLEG): container finished" podID="1a92ef88-05db-43fe-a0bb-1d188afbae59" containerID="e6fe3a3141d314270a76b0a597b39b2e42ebda56acc5631b73c9ce583e3d92d5" exitCode=143 Feb 19 10:28:45 crc kubenswrapper[4811]: I0219 10:28:44.999844 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f45d64cb-4stgq" event={"ID":"1a92ef88-05db-43fe-a0bb-1d188afbae59","Type":"ContainerDied","Data":"e6fe3a3141d314270a76b0a597b39b2e42ebda56acc5631b73c9ce583e3d92d5"} Feb 19 10:28:45 crc kubenswrapper[4811]: I0219 10:28:45.408612 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:28:45 crc kubenswrapper[4811]: I0219 10:28:45.411081 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:28:45 crc kubenswrapper[4811]: I0219 10:28:45.575652 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:28:45 crc kubenswrapper[4811]: I0219 10:28:45.576049 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:28:46 crc kubenswrapper[4811]: I0219 10:28:46.010975 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8399ed-d9be-43ed-b390-1848772886c4","Type":"ContainerStarted","Data":"13dc6272b24e4ab04f47b68b9d225cec1e92124622717bc8eacc5a853f59d7fc"} Feb 19 10:28:46 crc kubenswrapper[4811]: I0219 10:28:46.038729 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.01906045 podStartE2EDuration="9.038705961s" podCreationTimestamp="2026-02-19 10:28:37 +0000 UTC" firstStartedPulling="2026-02-19 10:28:38.904729155 +0000 UTC m=+1207.431193841" lastFinishedPulling="2026-02-19 10:28:44.924374666 +0000 UTC 
m=+1213.450839352" observedRunningTime="2026-02-19 10:28:46.037263584 +0000 UTC m=+1214.563728270" watchObservedRunningTime="2026-02-19 10:28:46.038705961 +0000 UTC m=+1214.565170647" Feb 19 10:28:47 crc kubenswrapper[4811]: I0219 10:28:47.020729 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:28:47 crc kubenswrapper[4811]: I0219 10:28:47.935605 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.097598 4811 generic.go:334] "Generic (PLEG): container finished" podID="1a92ef88-05db-43fe-a0bb-1d188afbae59" containerID="d8cb62fd80072f2cbc83755c9a1d2433b446b094759e27ffd2aef4f9f647738a" exitCode=0 Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.099375 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f45d64cb-4stgq" event={"ID":"1a92ef88-05db-43fe-a0bb-1d188afbae59","Type":"ContainerDied","Data":"d8cb62fd80072f2cbc83755c9a1d2433b446b094759e27ffd2aef4f9f647738a"} Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.263205 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.353948 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.500608 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-scripts\") pod \"1a92ef88-05db-43fe-a0bb-1d188afbae59\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.500763 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-public-tls-certs\") pod \"1a92ef88-05db-43fe-a0bb-1d188afbae59\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.500941 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-internal-tls-certs\") pod \"1a92ef88-05db-43fe-a0bb-1d188afbae59\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.500974 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-config-data\") pod \"1a92ef88-05db-43fe-a0bb-1d188afbae59\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.501048 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-combined-ca-bundle\") pod \"1a92ef88-05db-43fe-a0bb-1d188afbae59\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.501135 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6b8f\" 
(UniqueName: \"kubernetes.io/projected/1a92ef88-05db-43fe-a0bb-1d188afbae59-kube-api-access-p6b8f\") pod \"1a92ef88-05db-43fe-a0bb-1d188afbae59\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.501174 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a92ef88-05db-43fe-a0bb-1d188afbae59-logs\") pod \"1a92ef88-05db-43fe-a0bb-1d188afbae59\" (UID: \"1a92ef88-05db-43fe-a0bb-1d188afbae59\") " Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.501798 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a92ef88-05db-43fe-a0bb-1d188afbae59-logs" (OuterVolumeSpecName: "logs") pod "1a92ef88-05db-43fe-a0bb-1d188afbae59" (UID: "1a92ef88-05db-43fe-a0bb-1d188afbae59"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.512072 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-scripts" (OuterVolumeSpecName: "scripts") pod "1a92ef88-05db-43fe-a0bb-1d188afbae59" (UID: "1a92ef88-05db-43fe-a0bb-1d188afbae59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.512295 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a92ef88-05db-43fe-a0bb-1d188afbae59-kube-api-access-p6b8f" (OuterVolumeSpecName: "kube-api-access-p6b8f") pod "1a92ef88-05db-43fe-a0bb-1d188afbae59" (UID: "1a92ef88-05db-43fe-a0bb-1d188afbae59"). InnerVolumeSpecName "kube-api-access-p6b8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.622367 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6b8f\" (UniqueName: \"kubernetes.io/projected/1a92ef88-05db-43fe-a0bb-1d188afbae59-kube-api-access-p6b8f\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.622428 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a92ef88-05db-43fe-a0bb-1d188afbae59-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.622468 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.625646 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-config-data" (OuterVolumeSpecName: "config-data") pod "1a92ef88-05db-43fe-a0bb-1d188afbae59" (UID: "1a92ef88-05db-43fe-a0bb-1d188afbae59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.630683 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a92ef88-05db-43fe-a0bb-1d188afbae59" (UID: "1a92ef88-05db-43fe-a0bb-1d188afbae59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.661632 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1a92ef88-05db-43fe-a0bb-1d188afbae59" (UID: "1a92ef88-05db-43fe-a0bb-1d188afbae59"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.685939 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j6nxh"] Feb 19 10:28:48 crc kubenswrapper[4811]: E0219 10:28:48.686579 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409ceda0-e9b0-4d91-8a2b-827d68b00483" containerName="mariadb-database-create" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.686604 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="409ceda0-e9b0-4d91-8a2b-827d68b00483" containerName="mariadb-database-create" Feb 19 10:28:48 crc kubenswrapper[4811]: E0219 10:28:48.686622 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78de4cdd-5be9-44ae-b1b9-783bf1279fae" containerName="mariadb-account-create-update" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.686629 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="78de4cdd-5be9-44ae-b1b9-783bf1279fae" containerName="mariadb-account-create-update" Feb 19 10:28:48 crc kubenswrapper[4811]: E0219 10:28:48.686646 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb83c4af-999b-46bc-89b8-f0909ccb3f14" containerName="neutron-httpd" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.686653 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb83c4af-999b-46bc-89b8-f0909ccb3f14" containerName="neutron-httpd" Feb 19 10:28:48 crc kubenswrapper[4811]: E0219 10:28:48.686664 4811 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3dfb180a-094f-4716-b7b8-a50cb4841cc9" containerName="mariadb-database-create" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.686670 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfb180a-094f-4716-b7b8-a50cb4841cc9" containerName="mariadb-database-create" Feb 19 10:28:48 crc kubenswrapper[4811]: E0219 10:28:48.686694 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9be9489-aa33-45fd-881f-7b8da8a5022c" containerName="mariadb-account-create-update" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.686702 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9be9489-aa33-45fd-881f-7b8da8a5022c" containerName="mariadb-account-create-update" Feb 19 10:28:48 crc kubenswrapper[4811]: E0219 10:28:48.686718 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a92ef88-05db-43fe-a0bb-1d188afbae59" containerName="placement-log" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.686726 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a92ef88-05db-43fe-a0bb-1d188afbae59" containerName="placement-log" Feb 19 10:28:48 crc kubenswrapper[4811]: E0219 10:28:48.686733 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb83c4af-999b-46bc-89b8-f0909ccb3f14" containerName="neutron-api" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.686740 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb83c4af-999b-46bc-89b8-f0909ccb3f14" containerName="neutron-api" Feb 19 10:28:48 crc kubenswrapper[4811]: E0219 10:28:48.686757 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d3635d-5cd3-4cbb-a214-5777c6b95a3e" containerName="mariadb-account-create-update" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.686765 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d3635d-5cd3-4cbb-a214-5777c6b95a3e" containerName="mariadb-account-create-update" Feb 19 10:28:48 crc kubenswrapper[4811]: E0219 
10:28:48.686775 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aea467f-47c2-470e-a5db-2c3146a21658" containerName="mariadb-database-create" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.686781 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aea467f-47c2-470e-a5db-2c3146a21658" containerName="mariadb-database-create" Feb 19 10:28:48 crc kubenswrapper[4811]: E0219 10:28:48.686793 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a92ef88-05db-43fe-a0bb-1d188afbae59" containerName="placement-api" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.686799 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a92ef88-05db-43fe-a0bb-1d188afbae59" containerName="placement-api" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.687004 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb83c4af-999b-46bc-89b8-f0909ccb3f14" containerName="neutron-httpd" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.687023 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a92ef88-05db-43fe-a0bb-1d188afbae59" containerName="placement-log" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.687035 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="78de4cdd-5be9-44ae-b1b9-783bf1279fae" containerName="mariadb-account-create-update" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.687048 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="409ceda0-e9b0-4d91-8a2b-827d68b00483" containerName="mariadb-database-create" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.687060 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb83c4af-999b-46bc-89b8-f0909ccb3f14" containerName="neutron-api" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.687068 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a92ef88-05db-43fe-a0bb-1d188afbae59" containerName="placement-api" Feb 19 
10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.687078 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aea467f-47c2-470e-a5db-2c3146a21658" containerName="mariadb-database-create" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.687090 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfb180a-094f-4716-b7b8-a50cb4841cc9" containerName="mariadb-database-create" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.687127 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d3635d-5cd3-4cbb-a214-5777c6b95a3e" containerName="mariadb-account-create-update" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.687137 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9be9489-aa33-45fd-881f-7b8da8a5022c" containerName="mariadb-account-create-update" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.688014 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.692915 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-f2rvf" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.693142 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.693273 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.708628 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j6nxh"] Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.710189 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod 
"1a92ef88-05db-43fe-a0bb-1d188afbae59" (UID: "1a92ef88-05db-43fe-a0bb-1d188afbae59"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.724872 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.724906 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.724917 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.724926 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a92ef88-05db-43fe-a0bb-1d188afbae59-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.826830 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dcqg\" (UniqueName: \"kubernetes.io/projected/28527c23-663e-467f-b7df-90e96a677700-kube-api-access-8dcqg\") pod \"nova-cell0-conductor-db-sync-j6nxh\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.826952 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-scripts\") pod \"nova-cell0-conductor-db-sync-j6nxh\" (UID: 
\"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.827028 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-config-data\") pod \"nova-cell0-conductor-db-sync-j6nxh\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.827107 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j6nxh\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.928760 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j6nxh\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.928863 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcqg\" (UniqueName: \"kubernetes.io/projected/28527c23-663e-467f-b7df-90e96a677700-kube-api-access-8dcqg\") pod \"nova-cell0-conductor-db-sync-j6nxh\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.928925 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-scripts\") pod 
\"nova-cell0-conductor-db-sync-j6nxh\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.928959 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-config-data\") pod \"nova-cell0-conductor-db-sync-j6nxh\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.938033 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-config-data\") pod \"nova-cell0-conductor-db-sync-j6nxh\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.938225 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-j6nxh\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.939196 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-scripts\") pod \"nova-cell0-conductor-db-sync-j6nxh\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:48 crc kubenswrapper[4811]: I0219 10:28:48.949883 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dcqg\" (UniqueName: \"kubernetes.io/projected/28527c23-663e-467f-b7df-90e96a677700-kube-api-access-8dcqg\") pod \"nova-cell0-conductor-db-sync-j6nxh\" 
(UID: \"28527c23-663e-467f-b7df-90e96a677700\") " pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.009501 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.124401 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="ceilometer-central-agent" containerID="cri-o://cbc61300c164e874e274e655761b10cad62b657817845c4096024dc305776509" gracePeriod=30 Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.124722 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="proxy-httpd" containerID="cri-o://13dc6272b24e4ab04f47b68b9d225cec1e92124622717bc8eacc5a853f59d7fc" gracePeriod=30 Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.124756 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-68f45d64cb-4stgq" Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.124789 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="ceilometer-notification-agent" containerID="cri-o://ebacdea3e497af67cbff14f144c2a2137e722e99f432ecba4dbcf23b70e8423b" gracePeriod=30 Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.124731 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68f45d64cb-4stgq" event={"ID":"1a92ef88-05db-43fe-a0bb-1d188afbae59","Type":"ContainerDied","Data":"05fec9dddb84901c1b5176dede193857309a8b0f1787b9579057119aeaee4aca"} Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.124888 4811 scope.go:117] "RemoveContainer" containerID="d8cb62fd80072f2cbc83755c9a1d2433b446b094759e27ffd2aef4f9f647738a" Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.124940 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="sg-core" containerID="cri-o://9a894e26346c07705d4812963e8e5a722198c93fa5a9a993f6af35a1d92e8d9c" gracePeriod=30 Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.194535 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-68f45d64cb-4stgq"] Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.207653 4811 scope.go:117] "RemoveContainer" containerID="e6fe3a3141d314270a76b0a597b39b2e42ebda56acc5631b73c9ce583e3d92d5" Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.210495 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-68f45d64cb-4stgq"] Feb 19 10:28:49 crc kubenswrapper[4811]: I0219 10:28:49.535179 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j6nxh"] Feb 19 10:28:49 crc kubenswrapper[4811]: W0219 10:28:49.539724 4811 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28527c23_663e_467f_b7df_90e96a677700.slice/crio-68752b5102adfa64d045c97e8f2a1a2e6c81f1c9eab1d3f5efa151435560df55 WatchSource:0}: Error finding container 68752b5102adfa64d045c97e8f2a1a2e6c81f1c9eab1d3f5efa151435560df55: Status 404 returned error can't find the container with id 68752b5102adfa64d045c97e8f2a1a2e6c81f1c9eab1d3f5efa151435560df55 Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.147698 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j6nxh" event={"ID":"28527c23-663e-467f-b7df-90e96a677700","Type":"ContainerStarted","Data":"68752b5102adfa64d045c97e8f2a1a2e6c81f1c9eab1d3f5efa151435560df55"} Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.162000 4811 generic.go:334] "Generic (PLEG): container finished" podID="9b8399ed-d9be-43ed-b390-1848772886c4" containerID="13dc6272b24e4ab04f47b68b9d225cec1e92124622717bc8eacc5a853f59d7fc" exitCode=0 Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.162051 4811 generic.go:334] "Generic (PLEG): container finished" podID="9b8399ed-d9be-43ed-b390-1848772886c4" containerID="9a894e26346c07705d4812963e8e5a722198c93fa5a9a993f6af35a1d92e8d9c" exitCode=2 Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.162061 4811 generic.go:334] "Generic (PLEG): container finished" podID="9b8399ed-d9be-43ed-b390-1848772886c4" containerID="ebacdea3e497af67cbff14f144c2a2137e722e99f432ecba4dbcf23b70e8423b" exitCode=0 Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.162072 4811 generic.go:334] "Generic (PLEG): container finished" podID="9b8399ed-d9be-43ed-b390-1848772886c4" containerID="cbc61300c164e874e274e655761b10cad62b657817845c4096024dc305776509" exitCode=0 Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.162086 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9b8399ed-d9be-43ed-b390-1848772886c4","Type":"ContainerDied","Data":"13dc6272b24e4ab04f47b68b9d225cec1e92124622717bc8eacc5a853f59d7fc"} Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.162155 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8399ed-d9be-43ed-b390-1848772886c4","Type":"ContainerDied","Data":"9a894e26346c07705d4812963e8e5a722198c93fa5a9a993f6af35a1d92e8d9c"} Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.162171 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8399ed-d9be-43ed-b390-1848772886c4","Type":"ContainerDied","Data":"ebacdea3e497af67cbff14f144c2a2137e722e99f432ecba4dbcf23b70e8423b"} Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.162183 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8399ed-d9be-43ed-b390-1848772886c4","Type":"ContainerDied","Data":"cbc61300c164e874e274e655761b10cad62b657817845c4096024dc305776509"} Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.162197 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b8399ed-d9be-43ed-b390-1848772886c4","Type":"ContainerDied","Data":"a5919cb3be109761f79cf1c00ba234685d2f8004eb044e423977a0c82925915d"} Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.162209 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5919cb3be109761f79cf1c00ba234685d2f8004eb044e423977a0c82925915d" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.165966 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.257184 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-scripts\") pod \"9b8399ed-d9be-43ed-b390-1848772886c4\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.257268 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-run-httpd\") pod \"9b8399ed-d9be-43ed-b390-1848772886c4\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.257310 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-ceilometer-tls-certs\") pod \"9b8399ed-d9be-43ed-b390-1848772886c4\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.257338 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-log-httpd\") pod \"9b8399ed-d9be-43ed-b390-1848772886c4\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.257416 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-sg-core-conf-yaml\") pod \"9b8399ed-d9be-43ed-b390-1848772886c4\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.258341 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9b8399ed-d9be-43ed-b390-1848772886c4" (UID: "9b8399ed-d9be-43ed-b390-1848772886c4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.258972 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9b8399ed-d9be-43ed-b390-1848772886c4" (UID: "9b8399ed-d9be-43ed-b390-1848772886c4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.261141 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-config-data\") pod \"9b8399ed-d9be-43ed-b390-1848772886c4\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.261294 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kcww\" (UniqueName: \"kubernetes.io/projected/9b8399ed-d9be-43ed-b390-1848772886c4-kube-api-access-6kcww\") pod \"9b8399ed-d9be-43ed-b390-1848772886c4\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.261374 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-combined-ca-bundle\") pod \"9b8399ed-d9be-43ed-b390-1848772886c4\" (UID: \"9b8399ed-d9be-43ed-b390-1848772886c4\") " Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.262554 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.262581 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b8399ed-d9be-43ed-b390-1848772886c4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.279738 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-scripts" (OuterVolumeSpecName: "scripts") pod "9b8399ed-d9be-43ed-b390-1848772886c4" (UID: "9b8399ed-d9be-43ed-b390-1848772886c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.279801 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8399ed-d9be-43ed-b390-1848772886c4-kube-api-access-6kcww" (OuterVolumeSpecName: "kube-api-access-6kcww") pod "9b8399ed-d9be-43ed-b390-1848772886c4" (UID: "9b8399ed-d9be-43ed-b390-1848772886c4"). InnerVolumeSpecName "kube-api-access-6kcww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.308257 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9b8399ed-d9be-43ed-b390-1848772886c4" (UID: "9b8399ed-d9be-43ed-b390-1848772886c4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.365347 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.365393 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.365418 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kcww\" (UniqueName: \"kubernetes.io/projected/9b8399ed-d9be-43ed-b390-1848772886c4-kube-api-access-6kcww\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.388212 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b8399ed-d9be-43ed-b390-1848772886c4" (UID: "9b8399ed-d9be-43ed-b390-1848772886c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.405653 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9b8399ed-d9be-43ed-b390-1848772886c4" (UID: "9b8399ed-d9be-43ed-b390-1848772886c4"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.426167 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-config-data" (OuterVolumeSpecName: "config-data") pod "9b8399ed-d9be-43ed-b390-1848772886c4" (UID: "9b8399ed-d9be-43ed-b390-1848772886c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.467051 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.467100 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.467112 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8399ed-d9be-43ed-b390-1848772886c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:28:50 crc kubenswrapper[4811]: I0219 10:28:50.598750 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a92ef88-05db-43fe-a0bb-1d188afbae59" path="/var/lib/kubelet/pods/1a92ef88-05db-43fe-a0bb-1d188afbae59/volumes" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.526733 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.568212 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.586416 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.611771 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:51 crc kubenswrapper[4811]: E0219 10:28:51.612295 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="proxy-httpd" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.612315 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="proxy-httpd" Feb 19 10:28:51 crc kubenswrapper[4811]: E0219 10:28:51.612331 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="ceilometer-notification-agent" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.612339 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="ceilometer-notification-agent" Feb 19 10:28:51 crc kubenswrapper[4811]: E0219 10:28:51.612349 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="sg-core" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.612355 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="sg-core" Feb 19 10:28:51 crc kubenswrapper[4811]: E0219 10:28:51.612387 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="ceilometer-central-agent" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.612396 4811 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="ceilometer-central-agent" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.612636 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="ceilometer-central-agent" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.612657 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="sg-core" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.612670 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="proxy-httpd" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.612684 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" containerName="ceilometer-notification-agent" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.614438 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.628662 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.629133 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.629316 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.646203 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.797853 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-config-data\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.798197 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-scripts\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.798255 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-log-httpd\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.798286 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-run-httpd\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.798401 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.798443 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltf9w\" (UniqueName: \"kubernetes.io/projected/cc5ca584-480d-4418-8c44-1642a4e13286-kube-api-access-ltf9w\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.798479 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.798501 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.901253 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltf9w\" (UniqueName: \"kubernetes.io/projected/cc5ca584-480d-4418-8c44-1642a4e13286-kube-api-access-ltf9w\") pod 
\"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.901878 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.901925 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.901997 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-config-data\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.902113 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-scripts\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.902195 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-log-httpd\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.902221 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-run-httpd\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.902288 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.913250 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-run-httpd\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.913719 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-log-httpd\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.922178 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.922776 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: 
I0219 10:28:51.928322 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.930048 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-scripts\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.931261 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-config-data\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.939687 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltf9w\" (UniqueName: \"kubernetes.io/projected/cc5ca584-480d-4418-8c44-1642a4e13286-kube-api-access-ltf9w\") pod \"ceilometer-0\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " pod="openstack/ceilometer-0" Feb 19 10:28:51 crc kubenswrapper[4811]: I0219 10:28:51.948192 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:28:52 crc kubenswrapper[4811]: I0219 10:28:52.607041 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8399ed-d9be-43ed-b390-1848772886c4" path="/var/lib/kubelet/pods/9b8399ed-d9be-43ed-b390-1848772886c4/volumes" Feb 19 10:28:52 crc kubenswrapper[4811]: I0219 10:28:52.610508 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:53 crc kubenswrapper[4811]: I0219 10:28:53.214858 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:28:53 crc kubenswrapper[4811]: I0219 10:28:53.564261 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc5ca584-480d-4418-8c44-1642a4e13286","Type":"ContainerStarted","Data":"b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4"} Feb 19 10:28:53 crc kubenswrapper[4811]: I0219 10:28:53.564339 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc5ca584-480d-4418-8c44-1642a4e13286","Type":"ContainerStarted","Data":"d0d9bf67d5055691b21c703a33c0faa041e356c86a8c00ab8c4a123cde9bca34"} Feb 19 10:28:54 crc kubenswrapper[4811]: I0219 10:28:54.573388 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc5ca584-480d-4418-8c44-1642a4e13286","Type":"ContainerStarted","Data":"a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02"} Feb 19 10:28:55 crc kubenswrapper[4811]: I0219 10:28:55.413691 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-766dfcbd56-lqbrt" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Feb 19 10:28:55 crc kubenswrapper[4811]: I0219 10:28:55.580475 4811 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/horizon-676dcb6496-tnhmt" podUID="19d17501-8d5f-4f9d-8fad-94f48224e910" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Feb 19 10:28:56 crc kubenswrapper[4811]: I0219 10:28:56.157641 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:28:56 crc kubenswrapper[4811]: I0219 10:28:56.158022 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d3322db9-30c9-4f53-af57-f5a45c7f9950" containerName="glance-log" containerID="cri-o://fb88796dd5915e8d36949bf388c2c0147a3ae0ad76e34cf9eba67222a6140c0d" gracePeriod=30 Feb 19 10:28:56 crc kubenswrapper[4811]: I0219 10:28:56.158129 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d3322db9-30c9-4f53-af57-f5a45c7f9950" containerName="glance-httpd" containerID="cri-o://5b1790bb12b9820123c9a97faa6e082b8522f146c8aea05c5ebdb448083c6d18" gracePeriod=30 Feb 19 10:28:56 crc kubenswrapper[4811]: I0219 10:28:56.597074 4811 generic.go:334] "Generic (PLEG): container finished" podID="d3322db9-30c9-4f53-af57-f5a45c7f9950" containerID="fb88796dd5915e8d36949bf388c2c0147a3ae0ad76e34cf9eba67222a6140c0d" exitCode=143 Feb 19 10:28:56 crc kubenswrapper[4811]: I0219 10:28:56.597561 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3322db9-30c9-4f53-af57-f5a45c7f9950","Type":"ContainerDied","Data":"fb88796dd5915e8d36949bf388c2c0147a3ae0ad76e34cf9eba67222a6140c0d"} Feb 19 10:28:56 crc kubenswrapper[4811]: I0219 10:28:56.788613 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:28:56 crc kubenswrapper[4811]: I0219 10:28:56.788722 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:28:58 crc kubenswrapper[4811]: I0219 10:28:58.659142 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:28:58 crc kubenswrapper[4811]: I0219 10:28:58.659932 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8ddb9c94-9f78-447b-b65d-93cd67081226" containerName="glance-log" containerID="cri-o://da8e033020ef4c4bb22c4e2350708e3908597ed93b4c73737cdfd1bd0a73768f" gracePeriod=30 Feb 19 10:28:58 crc kubenswrapper[4811]: I0219 10:28:58.660571 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8ddb9c94-9f78-447b-b65d-93cd67081226" containerName="glance-httpd" containerID="cri-o://ec8e61ff34c251df2d0255f1a318fe8613e4a234d6bca515d97b64092128f36c" gracePeriod=30 Feb 19 10:28:59 crc kubenswrapper[4811]: I0219 10:28:59.652896 4811 generic.go:334] "Generic (PLEG): container finished" podID="8ddb9c94-9f78-447b-b65d-93cd67081226" containerID="da8e033020ef4c4bb22c4e2350708e3908597ed93b4c73737cdfd1bd0a73768f" exitCode=143 Feb 19 10:28:59 crc kubenswrapper[4811]: I0219 10:28:59.652984 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ddb9c94-9f78-447b-b65d-93cd67081226","Type":"ContainerDied","Data":"da8e033020ef4c4bb22c4e2350708e3908597ed93b4c73737cdfd1bd0a73768f"} Feb 19 10:28:59 crc kubenswrapper[4811]: I0219 
10:28:59.658790 4811 generic.go:334] "Generic (PLEG): container finished" podID="d3322db9-30c9-4f53-af57-f5a45c7f9950" containerID="5b1790bb12b9820123c9a97faa6e082b8522f146c8aea05c5ebdb448083c6d18" exitCode=0 Feb 19 10:28:59 crc kubenswrapper[4811]: I0219 10:28:59.658843 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3322db9-30c9-4f53-af57-f5a45c7f9950","Type":"ContainerDied","Data":"5b1790bb12b9820123c9a97faa6e082b8522f146c8aea05c5ebdb448083c6d18"} Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.681039 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3322db9-30c9-4f53-af57-f5a45c7f9950","Type":"ContainerDied","Data":"a54451043890b2afafaabd04e380c506090ed088c2130f516b24254f0c0ee27a"} Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.681584 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a54451043890b2afafaabd04e380c506090ed088c2130f516b24254f0c0ee27a" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.714375 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.852372 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp7pb\" (UniqueName: \"kubernetes.io/projected/d3322db9-30c9-4f53-af57-f5a45c7f9950-kube-api-access-pp7pb\") pod \"d3322db9-30c9-4f53-af57-f5a45c7f9950\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.852533 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-config-data\") pod \"d3322db9-30c9-4f53-af57-f5a45c7f9950\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.852683 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"d3322db9-30c9-4f53-af57-f5a45c7f9950\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.852733 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-httpd-run\") pod \"d3322db9-30c9-4f53-af57-f5a45c7f9950\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.852764 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-combined-ca-bundle\") pod \"d3322db9-30c9-4f53-af57-f5a45c7f9950\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.852787 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-logs\") pod \"d3322db9-30c9-4f53-af57-f5a45c7f9950\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.852862 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-public-tls-certs\") pod \"d3322db9-30c9-4f53-af57-f5a45c7f9950\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.852925 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-scripts\") pod \"d3322db9-30c9-4f53-af57-f5a45c7f9950\" (UID: \"d3322db9-30c9-4f53-af57-f5a45c7f9950\") " Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.853701 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d3322db9-30c9-4f53-af57-f5a45c7f9950" (UID: "d3322db9-30c9-4f53-af57-f5a45c7f9950"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.854082 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-logs" (OuterVolumeSpecName: "logs") pod "d3322db9-30c9-4f53-af57-f5a45c7f9950" (UID: "d3322db9-30c9-4f53-af57-f5a45c7f9950"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.869296 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-scripts" (OuterVolumeSpecName: "scripts") pod "d3322db9-30c9-4f53-af57-f5a45c7f9950" (UID: "d3322db9-30c9-4f53-af57-f5a45c7f9950"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.873913 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3322db9-30c9-4f53-af57-f5a45c7f9950-kube-api-access-pp7pb" (OuterVolumeSpecName: "kube-api-access-pp7pb") pod "d3322db9-30c9-4f53-af57-f5a45c7f9950" (UID: "d3322db9-30c9-4f53-af57-f5a45c7f9950"). InnerVolumeSpecName "kube-api-access-pp7pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.883715 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "d3322db9-30c9-4f53-af57-f5a45c7f9950" (UID: "d3322db9-30c9-4f53-af57-f5a45c7f9950"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.955651 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-config-data" (OuterVolumeSpecName: "config-data") pod "d3322db9-30c9-4f53-af57-f5a45c7f9950" (UID: "d3322db9-30c9-4f53-af57-f5a45c7f9950"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.956037 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.956067 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.956077 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp7pb\" (UniqueName: \"kubernetes.io/projected/d3322db9-30c9-4f53-af57-f5a45c7f9950-kube-api-access-pp7pb\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.956088 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.956118 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.956128 4811 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3322db9-30c9-4f53-af57-f5a45c7f9950-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.975437 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3322db9-30c9-4f53-af57-f5a45c7f9950" (UID: "d3322db9-30c9-4f53-af57-f5a45c7f9950"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:01 crc kubenswrapper[4811]: I0219 10:29:01.994073 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.006219 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d3322db9-30c9-4f53-af57-f5a45c7f9950" (UID: "d3322db9-30c9-4f53-af57-f5a45c7f9950"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.057903 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.058481 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.058550 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3322db9-30c9-4f53-af57-f5a45c7f9950-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.697605 4811 generic.go:334] "Generic (PLEG): container finished" podID="8ddb9c94-9f78-447b-b65d-93cd67081226" containerID="ec8e61ff34c251df2d0255f1a318fe8613e4a234d6bca515d97b64092128f36c" exitCode=0 Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.697655 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"8ddb9c94-9f78-447b-b65d-93cd67081226","Type":"ContainerDied","Data":"ec8e61ff34c251df2d0255f1a318fe8613e4a234d6bca515d97b64092128f36c"} Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.697740 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ddb9c94-9f78-447b-b65d-93cd67081226","Type":"ContainerDied","Data":"29924206aa7fae8aac994e8d609b5d90117af2726ef6bce6f8a366dab8474b4a"} Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.697755 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29924206aa7fae8aac994e8d609b5d90117af2726ef6bce6f8a366dab8474b4a" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.700301 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j6nxh" event={"ID":"28527c23-663e-467f-b7df-90e96a677700","Type":"ContainerStarted","Data":"272b02e6235c05038091dc13a6a228f3743e79220b6e507e629029169e307b73"} Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.706277 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc5ca584-480d-4418-8c44-1642a4e13286","Type":"ContainerStarted","Data":"3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a"} Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.706364 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.727912 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-j6nxh" podStartSLOduration=2.751438908 podStartE2EDuration="14.727879573s" podCreationTimestamp="2026-02-19 10:28:48 +0000 UTC" firstStartedPulling="2026-02-19 10:28:49.542140586 +0000 UTC m=+1218.068605272" lastFinishedPulling="2026-02-19 10:29:01.518581251 +0000 UTC m=+1230.045045937" observedRunningTime="2026-02-19 10:29:02.71952816 +0000 UTC m=+1231.245992856" watchObservedRunningTime="2026-02-19 10:29:02.727879573 +0000 UTC m=+1231.254344259" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.731564 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.751053 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.761303 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.776153 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:29:02 crc kubenswrapper[4811]: E0219 10:29:02.776715 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3322db9-30c9-4f53-af57-f5a45c7f9950" containerName="glance-log" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.776739 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3322db9-30c9-4f53-af57-f5a45c7f9950" containerName="glance-log" Feb 19 10:29:02 crc kubenswrapper[4811]: E0219 10:29:02.776754 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddb9c94-9f78-447b-b65d-93cd67081226" containerName="glance-log" Feb 19 10:29:02 crc 
kubenswrapper[4811]: I0219 10:29:02.776762 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddb9c94-9f78-447b-b65d-93cd67081226" containerName="glance-log" Feb 19 10:29:02 crc kubenswrapper[4811]: E0219 10:29:02.776774 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3322db9-30c9-4f53-af57-f5a45c7f9950" containerName="glance-httpd" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.776781 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3322db9-30c9-4f53-af57-f5a45c7f9950" containerName="glance-httpd" Feb 19 10:29:02 crc kubenswrapper[4811]: E0219 10:29:02.776804 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddb9c94-9f78-447b-b65d-93cd67081226" containerName="glance-httpd" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.776810 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddb9c94-9f78-447b-b65d-93cd67081226" containerName="glance-httpd" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.777028 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ddb9c94-9f78-447b-b65d-93cd67081226" containerName="glance-log" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.777051 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3322db9-30c9-4f53-af57-f5a45c7f9950" containerName="glance-httpd" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.777062 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ddb9c94-9f78-447b-b65d-93cd67081226" containerName="glance-httpd" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.777076 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3322db9-30c9-4f53-af57-f5a45c7f9950" containerName="glance-log" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.778334 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.783416 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.783774 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.789306 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.899393 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-logs\") pod \"8ddb9c94-9f78-447b-b65d-93cd67081226\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.899497 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-combined-ca-bundle\") pod \"8ddb9c94-9f78-447b-b65d-93cd67081226\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.899581 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6tqh\" (UniqueName: \"kubernetes.io/projected/8ddb9c94-9f78-447b-b65d-93cd67081226-kube-api-access-s6tqh\") pod \"8ddb9c94-9f78-447b-b65d-93cd67081226\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.899715 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-httpd-run\") pod \"8ddb9c94-9f78-447b-b65d-93cd67081226\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " Feb 
19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.899780 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-config-data\") pod \"8ddb9c94-9f78-447b-b65d-93cd67081226\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.899874 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"8ddb9c94-9f78-447b-b65d-93cd67081226\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.899924 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-internal-tls-certs\") pod \"8ddb9c94-9f78-447b-b65d-93cd67081226\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.899961 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-scripts\") pod \"8ddb9c94-9f78-447b-b65d-93cd67081226\" (UID: \"8ddb9c94-9f78-447b-b65d-93cd67081226\") " Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.900303 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.900433 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.900479 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.900508 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.900565 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmfhk\" (UniqueName: \"kubernetes.io/projected/5a784552-f4b7-42d3-a0b7-d223fbe3818c-kube-api-access-rmfhk\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.900594 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a784552-f4b7-42d3-a0b7-d223fbe3818c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.900623 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.900678 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a784552-f4b7-42d3-a0b7-d223fbe3818c-logs\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.902593 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-logs" (OuterVolumeSpecName: "logs") pod "8ddb9c94-9f78-447b-b65d-93cd67081226" (UID: "8ddb9c94-9f78-447b-b65d-93cd67081226"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.905567 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8ddb9c94-9f78-447b-b65d-93cd67081226" (UID: "8ddb9c94-9f78-447b-b65d-93cd67081226"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.911519 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "8ddb9c94-9f78-447b-b65d-93cd67081226" (UID: "8ddb9c94-9f78-447b-b65d-93cd67081226"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.913987 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-scripts" (OuterVolumeSpecName: "scripts") pod "8ddb9c94-9f78-447b-b65d-93cd67081226" (UID: "8ddb9c94-9f78-447b-b65d-93cd67081226"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.918971 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ddb9c94-9f78-447b-b65d-93cd67081226-kube-api-access-s6tqh" (OuterVolumeSpecName: "kube-api-access-s6tqh") pod "8ddb9c94-9f78-447b-b65d-93cd67081226" (UID: "8ddb9c94-9f78-447b-b65d-93cd67081226"). InnerVolumeSpecName "kube-api-access-s6tqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:02 crc kubenswrapper[4811]: I0219 10:29:02.956582 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ddb9c94-9f78-447b-b65d-93cd67081226" (UID: "8ddb9c94-9f78-447b-b65d-93cd67081226"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.003957 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004089 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004123 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004152 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004196 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmfhk\" (UniqueName: \"kubernetes.io/projected/5a784552-f4b7-42d3-a0b7-d223fbe3818c-kube-api-access-rmfhk\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc 
kubenswrapper[4811]: I0219 10:29:03.004222 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a784552-f4b7-42d3-a0b7-d223fbe3818c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004250 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004289 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a784552-f4b7-42d3-a0b7-d223fbe3818c-logs\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004365 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004376 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004389 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6tqh\" (UniqueName: \"kubernetes.io/projected/8ddb9c94-9f78-447b-b65d-93cd67081226-kube-api-access-s6tqh\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004398 4811 
reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ddb9c94-9f78-447b-b65d-93cd67081226-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004424 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.004435 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.006784 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a784552-f4b7-42d3-a0b7-d223fbe3818c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.007732 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.009345 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a784552-f4b7-42d3-a0b7-d223fbe3818c-logs\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.031575 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.034780 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8ddb9c94-9f78-447b-b65d-93cd67081226" (UID: "8ddb9c94-9f78-447b-b65d-93cd67081226"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.040509 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.040834 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.041083 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmfhk\" (UniqueName: \"kubernetes.io/projected/5a784552-f4b7-42d3-a0b7-d223fbe3818c-kube-api-access-rmfhk\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.049988 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.063876 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-config-data" (OuterVolumeSpecName: "config-data") pod "8ddb9c94-9f78-447b-b65d-93cd67081226" (UID: "8ddb9c94-9f78-447b-b65d-93cd67081226"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.065305 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a784552-f4b7-42d3-a0b7-d223fbe3818c-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.074194 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"5a784552-f4b7-42d3-a0b7-d223fbe3818c\") " pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.101516 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.109380 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.109433 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.109461 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ddb9c94-9f78-447b-b65d-93cd67081226-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.716812 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.744353 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.778515 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.791961 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.818062 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.823125 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.827367 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.828017 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.830998 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.931179 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-logs\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.931230 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.931280 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.931527 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.931614 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.931857 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqk4p\" (UniqueName: \"kubernetes.io/projected/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-kube-api-access-pqk4p\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.931968 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:03 crc kubenswrapper[4811]: I0219 10:29:03.932057 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.034122 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pqk4p\" (UniqueName: \"kubernetes.io/projected/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-kube-api-access-pqk4p\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.034205 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.034252 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.034344 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-logs\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.034376 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.034442 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.034526 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.034556 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.035033 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.036115 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.037003 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.043431 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.044574 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.045911 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.050085 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.061960 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqk4p\" (UniqueName: \"kubernetes.io/projected/fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7-kube-api-access-pqk4p\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.109961 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7\") " pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.157922 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.612995 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ddb9c94-9f78-447b-b65d-93cd67081226" path="/var/lib/kubelet/pods/8ddb9c94-9f78-447b-b65d-93cd67081226/volumes" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.617430 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3322db9-30c9-4f53-af57-f5a45c7f9950" path="/var/lib/kubelet/pods/d3322db9-30c9-4f53-af57-f5a45c7f9950/volumes" Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.764945 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a784552-f4b7-42d3-a0b7-d223fbe3818c","Type":"ContainerStarted","Data":"dbc168cdf298c69b6e1c6fb78d378f76fef2559ff90ead8f08c34dab4fd3b59d"} Feb 19 10:29:04 crc kubenswrapper[4811]: I0219 10:29:04.853428 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 10:29:05 crc kubenswrapper[4811]: I0219 10:29:05.791036 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc5ca584-480d-4418-8c44-1642a4e13286","Type":"ContainerStarted","Data":"14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e"} Feb 19 10:29:05 crc kubenswrapper[4811]: I0219 10:29:05.791933 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ceilometer-0" Feb 19 10:29:05 crc kubenswrapper[4811]: I0219 10:29:05.791502 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="ceilometer-central-agent" containerID="cri-o://b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4" gracePeriod=30 Feb 19 10:29:05 crc kubenswrapper[4811]: I0219 10:29:05.792077 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="proxy-httpd" containerID="cri-o://14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e" gracePeriod=30 Feb 19 10:29:05 crc kubenswrapper[4811]: I0219 10:29:05.792174 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="ceilometer-notification-agent" containerID="cri-o://a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02" gracePeriod=30 Feb 19 10:29:05 crc kubenswrapper[4811]: I0219 10:29:05.792246 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="sg-core" containerID="cri-o://3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a" gracePeriod=30 Feb 19 10:29:05 crc kubenswrapper[4811]: I0219 10:29:05.804989 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a784552-f4b7-42d3-a0b7-d223fbe3818c","Type":"ContainerStarted","Data":"e2ac5db0bb11f0034e5b9277779030659fe00b51f22e5ab838bf5aca12e344b3"} Feb 19 10:29:05 crc kubenswrapper[4811]: I0219 10:29:05.811141 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7","Type":"ContainerStarted","Data":"ff08f5d6f157d496883fb7175b638df5a73bdcad375531661312f6b8c04445f7"} Feb 19 10:29:05 crc kubenswrapper[4811]: I0219 10:29:05.811198 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7","Type":"ContainerStarted","Data":"bce0d5986a143e0c3f1e353f43f7b78a354fa3fa357b4a8a7e21a5b383e5a993"} Feb 19 10:29:05 crc kubenswrapper[4811]: I0219 10:29:05.850180 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5571303690000002 podStartE2EDuration="14.850144401s" podCreationTimestamp="2026-02-19 10:28:51 +0000 UTC" firstStartedPulling="2026-02-19 10:28:52.608633469 +0000 UTC m=+1221.135098155" lastFinishedPulling="2026-02-19 10:29:04.901647501 +0000 UTC m=+1233.428112187" observedRunningTime="2026-02-19 10:29:05.839977341 +0000 UTC m=+1234.366442027" watchObservedRunningTime="2026-02-19 10:29:05.850144401 +0000 UTC m=+1234.376609087" Feb 19 10:29:06 crc kubenswrapper[4811]: I0219 10:29:06.742014 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6c301fbe-f93a-4b37-bc7b-3f226b08a6ba" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.166:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:29:06 crc kubenswrapper[4811]: I0219 10:29:06.828728 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a784552-f4b7-42d3-a0b7-d223fbe3818c","Type":"ContainerStarted","Data":"f521e492778f4fac166219233255fbeecd077e534031bee7549f9d7aa9689341"} Feb 19 10:29:06 crc kubenswrapper[4811]: I0219 10:29:06.831303 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7","Type":"ContainerStarted","Data":"f11476296150fd5ecf3f7720a79e936c484a9884f0671fc752daecfe015b7d79"} Feb 19 10:29:06 crc kubenswrapper[4811]: I0219 10:29:06.838002 4811 generic.go:334] "Generic (PLEG): container finished" podID="cc5ca584-480d-4418-8c44-1642a4e13286" containerID="14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e" exitCode=0 Feb 19 10:29:06 crc kubenswrapper[4811]: I0219 10:29:06.838057 4811 generic.go:334] "Generic (PLEG): container finished" podID="cc5ca584-480d-4418-8c44-1642a4e13286" containerID="3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a" exitCode=2 Feb 19 10:29:06 crc kubenswrapper[4811]: I0219 10:29:06.838068 4811 generic.go:334] "Generic (PLEG): container finished" podID="cc5ca584-480d-4418-8c44-1642a4e13286" containerID="b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4" exitCode=0 Feb 19 10:29:06 crc kubenswrapper[4811]: I0219 10:29:06.838108 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc5ca584-480d-4418-8c44-1642a4e13286","Type":"ContainerDied","Data":"14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e"} Feb 19 10:29:06 crc kubenswrapper[4811]: I0219 10:29:06.838155 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc5ca584-480d-4418-8c44-1642a4e13286","Type":"ContainerDied","Data":"3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a"} Feb 19 10:29:06 crc kubenswrapper[4811]: I0219 10:29:06.838169 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc5ca584-480d-4418-8c44-1642a4e13286","Type":"ContainerDied","Data":"b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4"} Feb 19 10:29:06 crc kubenswrapper[4811]: I0219 10:29:06.864333 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=4.864304097 podStartE2EDuration="4.864304097s" podCreationTimestamp="2026-02-19 10:29:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:29:06.858193991 +0000 UTC m=+1235.384658697" watchObservedRunningTime="2026-02-19 10:29:06.864304097 +0000 UTC m=+1235.390768783" Feb 19 10:29:06 crc kubenswrapper[4811]: I0219 10:29:06.894635 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.894600881 podStartE2EDuration="3.894600881s" podCreationTimestamp="2026-02-19 10:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:29:06.88318747 +0000 UTC m=+1235.409652176" watchObservedRunningTime="2026-02-19 10:29:06.894600881 +0000 UTC m=+1235.421065557" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.447314 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.591685 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-combined-ca-bundle\") pod \"cc5ca584-480d-4418-8c44-1642a4e13286\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.591816 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-log-httpd\") pod \"cc5ca584-480d-4418-8c44-1642a4e13286\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.591906 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltf9w\" (UniqueName: \"kubernetes.io/projected/cc5ca584-480d-4418-8c44-1642a4e13286-kube-api-access-ltf9w\") pod \"cc5ca584-480d-4418-8c44-1642a4e13286\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.591972 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-scripts\") pod \"cc5ca584-480d-4418-8c44-1642a4e13286\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.592037 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-run-httpd\") pod \"cc5ca584-480d-4418-8c44-1642a4e13286\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.592219 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-sg-core-conf-yaml\") pod \"cc5ca584-480d-4418-8c44-1642a4e13286\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.592325 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-ceilometer-tls-certs\") pod \"cc5ca584-480d-4418-8c44-1642a4e13286\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.592367 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-config-data\") pod \"cc5ca584-480d-4418-8c44-1642a4e13286\" (UID: \"cc5ca584-480d-4418-8c44-1642a4e13286\") " Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.592659 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cc5ca584-480d-4418-8c44-1642a4e13286" (UID: "cc5ca584-480d-4418-8c44-1642a4e13286"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.593139 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.593476 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cc5ca584-480d-4418-8c44-1642a4e13286" (UID: "cc5ca584-480d-4418-8c44-1642a4e13286"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.619293 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5ca584-480d-4418-8c44-1642a4e13286-kube-api-access-ltf9w" (OuterVolumeSpecName: "kube-api-access-ltf9w") pod "cc5ca584-480d-4418-8c44-1642a4e13286" (UID: "cc5ca584-480d-4418-8c44-1642a4e13286"). InnerVolumeSpecName "kube-api-access-ltf9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.623387 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-scripts" (OuterVolumeSpecName: "scripts") pod "cc5ca584-480d-4418-8c44-1642a4e13286" (UID: "cc5ca584-480d-4418-8c44-1642a4e13286"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.644577 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cc5ca584-480d-4418-8c44-1642a4e13286" (UID: "cc5ca584-480d-4418-8c44-1642a4e13286"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.652318 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "cc5ca584-480d-4418-8c44-1642a4e13286" (UID: "cc5ca584-480d-4418-8c44-1642a4e13286"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.673230 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.697983 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltf9w\" (UniqueName: \"kubernetes.io/projected/cc5ca584-480d-4418-8c44-1642a4e13286-kube-api-access-ltf9w\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.698041 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.698056 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc5ca584-480d-4418-8c44-1642a4e13286-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.698084 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.698096 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.709074 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc5ca584-480d-4418-8c44-1642a4e13286" (UID: "cc5ca584-480d-4418-8c44-1642a4e13286"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.754774 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-config-data" (OuterVolumeSpecName: "config-data") pod "cc5ca584-480d-4418-8c44-1642a4e13286" (UID: "cc5ca584-480d-4418-8c44-1642a4e13286"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.799388 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.799427 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc5ca584-480d-4418-8c44-1642a4e13286-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.852480 4811 generic.go:334] "Generic (PLEG): container finished" podID="cc5ca584-480d-4418-8c44-1642a4e13286" containerID="a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02" exitCode=0 Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.852579 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc5ca584-480d-4418-8c44-1642a4e13286","Type":"ContainerDied","Data":"a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02"} Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.852614 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.852687 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc5ca584-480d-4418-8c44-1642a4e13286","Type":"ContainerDied","Data":"d0d9bf67d5055691b21c703a33c0faa041e356c86a8c00ab8c4a123cde9bca34"} Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.852725 4811 scope.go:117] "RemoveContainer" containerID="14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.882342 4811 scope.go:117] "RemoveContainer" containerID="3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.911880 4811 scope.go:117] "RemoveContainer" containerID="a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.919757 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.933399 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.951673 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:29:07 crc kubenswrapper[4811]: E0219 10:29:07.952409 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="proxy-httpd" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.952439 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="proxy-httpd" Feb 19 10:29:07 crc kubenswrapper[4811]: E0219 10:29:07.952512 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="ceilometer-central-agent" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.952524 4811 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="ceilometer-central-agent" Feb 19 10:29:07 crc kubenswrapper[4811]: E0219 10:29:07.952563 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="ceilometer-notification-agent" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.952574 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="ceilometer-notification-agent" Feb 19 10:29:07 crc kubenswrapper[4811]: E0219 10:29:07.952588 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="sg-core" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.952597 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="sg-core" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.952890 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="sg-core" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.952931 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="proxy-httpd" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.952949 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="ceilometer-notification-agent" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.952960 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" containerName="ceilometer-central-agent" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.955578 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.962886 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.963390 4811 scope.go:117] "RemoveContainer" containerID="b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.964015 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.964194 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 10:29:07 crc kubenswrapper[4811]: I0219 10:29:07.964502 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.001229 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.009442 4811 scope.go:117] "RemoveContainer" containerID="14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e" Feb 19 10:29:08 crc kubenswrapper[4811]: E0219 10:29:08.010552 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e\": container with ID starting with 14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e not found: ID does not exist" containerID="14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.010607 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e"} err="failed to get container status 
\"14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e\": rpc error: code = NotFound desc = could not find container \"14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e\": container with ID starting with 14ecd677175c83fb6fe18266cfad4ab360f780e242a41ef81f275a89ab78c55e not found: ID does not exist" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.010638 4811 scope.go:117] "RemoveContainer" containerID="3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a" Feb 19 10:29:08 crc kubenswrapper[4811]: E0219 10:29:08.010997 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a\": container with ID starting with 3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a not found: ID does not exist" containerID="3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.011026 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a"} err="failed to get container status \"3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a\": rpc error: code = NotFound desc = could not find container \"3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a\": container with ID starting with 3be5c1b98c4fc123d6aa7b98e74277f582fd5e5c90fa2d0a1e43b0ad48c9656a not found: ID does not exist" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.011042 4811 scope.go:117] "RemoveContainer" containerID="a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02" Feb 19 10:29:08 crc kubenswrapper[4811]: E0219 10:29:08.011291 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02\": container with ID starting with a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02 not found: ID does not exist" containerID="a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.011317 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02"} err="failed to get container status \"a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02\": rpc error: code = NotFound desc = could not find container \"a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02\": container with ID starting with a473faa60240a715dfb60657385ad24a5599a062686e0d08481fbd300e417d02 not found: ID does not exist" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.011333 4811 scope.go:117] "RemoveContainer" containerID="b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4" Feb 19 10:29:08 crc kubenswrapper[4811]: E0219 10:29:08.011569 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4\": container with ID starting with b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4 not found: ID does not exist" containerID="b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.011600 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4"} err="failed to get container status \"b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4\": rpc error: code = NotFound desc = could not find container \"b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4\": container with ID 
starting with b47a75f1663170ff16642c61508edfbab8be412c4aacab0d20da336887ee09b4 not found: ID does not exist" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.111041 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.111117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-scripts\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.111149 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-run-httpd\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.111921 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2hg8\" (UniqueName: \"kubernetes.io/projected/1e35360f-5379-40e3-91ce-9059ba0dc61c-kube-api-access-p2hg8\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.112177 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-config-data\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc 
kubenswrapper[4811]: I0219 10:29:08.112571 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.112611 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-log-httpd\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.112694 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.214349 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.214401 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-scripts\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.214426 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-run-httpd\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.214487 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2hg8\" (UniqueName: \"kubernetes.io/projected/1e35360f-5379-40e3-91ce-9059ba0dc61c-kube-api-access-p2hg8\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.214518 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-config-data\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.214568 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-log-httpd\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.214585 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.214609 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc 
kubenswrapper[4811]: I0219 10:29:08.215129 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-run-httpd\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.215306 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-log-httpd\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.219555 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.220331 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.220374 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.221283 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-config-data\") pod \"ceilometer-0\" (UID: 
\"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.222377 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-scripts\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.236260 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2hg8\" (UniqueName: \"kubernetes.io/projected/1e35360f-5379-40e3-91ce-9059ba0dc61c-kube-api-access-p2hg8\") pod \"ceilometer-0\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.288720 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.597124 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5ca584-480d-4418-8c44-1642a4e13286" path="/var/lib/kubelet/pods/cc5ca584-480d-4418-8c44-1642a4e13286/volumes" Feb 19 10:29:08 crc kubenswrapper[4811]: W0219 10:29:08.774309 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e35360f_5379_40e3_91ce_9059ba0dc61c.slice/crio-533fffe7b5dc91c42b7dfab88f615dcfb3504ef1fd9397335d76ec5b6d210fc3 WatchSource:0}: Error finding container 533fffe7b5dc91c42b7dfab88f615dcfb3504ef1fd9397335d76ec5b6d210fc3: Status 404 returned error can't find the container with id 533fffe7b5dc91c42b7dfab88f615dcfb3504ef1fd9397335d76ec5b6d210fc3 Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.785108 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:29:08 crc kubenswrapper[4811]: I0219 10:29:08.883629 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"1e35360f-5379-40e3-91ce-9059ba0dc61c","Type":"ContainerStarted","Data":"533fffe7b5dc91c42b7dfab88f615dcfb3504ef1fd9397335d76ec5b6d210fc3"} Feb 19 10:29:09 crc kubenswrapper[4811]: I0219 10:29:09.781057 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:29:09 crc kubenswrapper[4811]: I0219 10:29:09.914332 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e35360f-5379-40e3-91ce-9059ba0dc61c","Type":"ContainerStarted","Data":"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77"} Feb 19 10:29:10 crc kubenswrapper[4811]: I0219 10:29:10.184086 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-676dcb6496-tnhmt" Feb 19 10:29:10 crc kubenswrapper[4811]: I0219 10:29:10.280484 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-766dfcbd56-lqbrt"] Feb 19 10:29:10 crc kubenswrapper[4811]: I0219 10:29:10.280760 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-766dfcbd56-lqbrt" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon-log" containerID="cri-o://73cc025a1567957d2afc71121863816e1f341f3be74bf2a4abb8ce0ce5ccf057" gracePeriod=30 Feb 19 10:29:10 crc kubenswrapper[4811]: I0219 10:29:10.280948 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-766dfcbd56-lqbrt" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" containerID="cri-o://bfc48b984c2ea255b68016b29a536c04abbfce299b7e71e4dc30275cf1b30bdd" gracePeriod=30 Feb 19 10:29:11 crc kubenswrapper[4811]: I0219 10:29:11.938306 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1e35360f-5379-40e3-91ce-9059ba0dc61c","Type":"ContainerStarted","Data":"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95"} Feb 19 10:29:11 crc kubenswrapper[4811]: I0219 10:29:11.938859 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e35360f-5379-40e3-91ce-9059ba0dc61c","Type":"ContainerStarted","Data":"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d"} Feb 19 10:29:13 crc kubenswrapper[4811]: I0219 10:29:13.103620 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:29:13 crc kubenswrapper[4811]: I0219 10:29:13.104176 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 10:29:13 crc kubenswrapper[4811]: I0219 10:29:13.138545 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:29:13 crc kubenswrapper[4811]: I0219 10:29:13.150653 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 10:29:13 crc kubenswrapper[4811]: I0219 10:29:13.968011 4811 generic.go:334] "Generic (PLEG): container finished" podID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerID="bfc48b984c2ea255b68016b29a536c04abbfce299b7e71e4dc30275cf1b30bdd" exitCode=0 Feb 19 10:29:13 crc kubenswrapper[4811]: I0219 10:29:13.968090 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-766dfcbd56-lqbrt" event={"ID":"4ab45e9b-6cfc-48a3-b603-f47d47bbd510","Type":"ContainerDied","Data":"bfc48b984c2ea255b68016b29a536c04abbfce299b7e71e4dc30275cf1b30bdd"} Feb 19 10:29:13 crc kubenswrapper[4811]: I0219 10:29:13.968723 4811 scope.go:117] "RemoveContainer" containerID="e88cbd6cffe02dfbcfdeed3f6b27917bd198051bdb2fc9298ecc4f011b2d45af" Feb 19 10:29:13 crc kubenswrapper[4811]: I0219 10:29:13.973485 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e35360f-5379-40e3-91ce-9059ba0dc61c","Type":"ContainerStarted","Data":"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9"} Feb 19 10:29:13 crc kubenswrapper[4811]: I0219 10:29:13.973769 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:29:13 crc kubenswrapper[4811]: I0219 10:29:13.973999 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:29:13 crc kubenswrapper[4811]: I0219 10:29:13.974369 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 10:29:14 crc kubenswrapper[4811]: I0219 10:29:14.004970 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.439369227 podStartE2EDuration="7.004946914s" podCreationTimestamp="2026-02-19 10:29:07 +0000 UTC" firstStartedPulling="2026-02-19 10:29:08.778230988 +0000 UTC m=+1237.304695674" lastFinishedPulling="2026-02-19 10:29:13.343808675 +0000 UTC m=+1241.870273361" observedRunningTime="2026-02-19 10:29:14.001997458 +0000 UTC m=+1242.528462154" watchObservedRunningTime="2026-02-19 10:29:14.004946914 +0000 UTC m=+1242.531411610" Feb 19 10:29:14 crc kubenswrapper[4811]: I0219 10:29:14.159220 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:14 crc kubenswrapper[4811]: I0219 10:29:14.159303 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:14 crc kubenswrapper[4811]: I0219 10:29:14.191652 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:14 crc kubenswrapper[4811]: I0219 10:29:14.207561 4811 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:14 crc kubenswrapper[4811]: I0219 10:29:14.985789 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:14 crc kubenswrapper[4811]: I0219 10:29:14.985840 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:15 crc kubenswrapper[4811]: I0219 10:29:15.409022 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-766dfcbd56-lqbrt" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Feb 19 10:29:15 crc kubenswrapper[4811]: I0219 10:29:15.981398 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:29:15 crc kubenswrapper[4811]: I0219 10:29:15.982366 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 10:29:16 crc kubenswrapper[4811]: I0219 10:29:16.000003 4811 generic.go:334] "Generic (PLEG): container finished" podID="28527c23-663e-467f-b7df-90e96a677700" containerID="272b02e6235c05038091dc13a6a228f3743e79220b6e507e629029169e307b73" exitCode=0 Feb 19 10:29:16 crc kubenswrapper[4811]: I0219 10:29:16.000102 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j6nxh" event={"ID":"28527c23-663e-467f-b7df-90e96a677700","Type":"ContainerDied","Data":"272b02e6235c05038091dc13a6a228f3743e79220b6e507e629029169e307b73"} Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.153319 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:17 crc 
kubenswrapper[4811]: I0219 10:29:17.153992 4811 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.567008 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.575143 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.733762 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-config-data\") pod \"28527c23-663e-467f-b7df-90e96a677700\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.733964 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-scripts\") pod \"28527c23-663e-467f-b7df-90e96a677700\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.734064 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-combined-ca-bundle\") pod \"28527c23-663e-467f-b7df-90e96a677700\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.734284 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dcqg\" (UniqueName: \"kubernetes.io/projected/28527c23-663e-467f-b7df-90e96a677700-kube-api-access-8dcqg\") pod \"28527c23-663e-467f-b7df-90e96a677700\" (UID: \"28527c23-663e-467f-b7df-90e96a677700\") " Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.744652 4811 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-scripts" (OuterVolumeSpecName: "scripts") pod "28527c23-663e-467f-b7df-90e96a677700" (UID: "28527c23-663e-467f-b7df-90e96a677700"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.746633 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28527c23-663e-467f-b7df-90e96a677700-kube-api-access-8dcqg" (OuterVolumeSpecName: "kube-api-access-8dcqg") pod "28527c23-663e-467f-b7df-90e96a677700" (UID: "28527c23-663e-467f-b7df-90e96a677700"). InnerVolumeSpecName "kube-api-access-8dcqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.791336 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28527c23-663e-467f-b7df-90e96a677700" (UID: "28527c23-663e-467f-b7df-90e96a677700"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.791805 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-config-data" (OuterVolumeSpecName: "config-data") pod "28527c23-663e-467f-b7df-90e96a677700" (UID: "28527c23-663e-467f-b7df-90e96a677700"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.836153 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dcqg\" (UniqueName: \"kubernetes.io/projected/28527c23-663e-467f-b7df-90e96a677700-kube-api-access-8dcqg\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.836205 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.836218 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:17 crc kubenswrapper[4811]: I0219 10:29:17.836230 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28527c23-663e-467f-b7df-90e96a677700-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.019716 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-j6nxh" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.025517 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-j6nxh" event={"ID":"28527c23-663e-467f-b7df-90e96a677700","Type":"ContainerDied","Data":"68752b5102adfa64d045c97e8f2a1a2e6c81f1c9eab1d3f5efa151435560df55"} Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.025558 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68752b5102adfa64d045c97e8f2a1a2e6c81f1c9eab1d3f5efa151435560df55" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.210675 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:29:18 crc kubenswrapper[4811]: E0219 10:29:18.211150 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28527c23-663e-467f-b7df-90e96a677700" containerName="nova-cell0-conductor-db-sync" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.211166 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="28527c23-663e-467f-b7df-90e96a677700" containerName="nova-cell0-conductor-db-sync" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.211343 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="28527c23-663e-467f-b7df-90e96a677700" containerName="nova-cell0-conductor-db-sync" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.214733 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.222125 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.222714 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-f2rvf" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.239261 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.350250 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92a06d2-fab4-462e-ae54-2dd7087ad94a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b92a06d2-fab4-462e-ae54-2dd7087ad94a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.350299 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92a06d2-fab4-462e-ae54-2dd7087ad94a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b92a06d2-fab4-462e-ae54-2dd7087ad94a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.350361 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwgjv\" (UniqueName: \"kubernetes.io/projected/b92a06d2-fab4-462e-ae54-2dd7087ad94a-kube-api-access-nwgjv\") pod \"nova-cell0-conductor-0\" (UID: \"b92a06d2-fab4-462e-ae54-2dd7087ad94a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.452381 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b92a06d2-fab4-462e-ae54-2dd7087ad94a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b92a06d2-fab4-462e-ae54-2dd7087ad94a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.453234 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92a06d2-fab4-462e-ae54-2dd7087ad94a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b92a06d2-fab4-462e-ae54-2dd7087ad94a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.453384 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwgjv\" (UniqueName: \"kubernetes.io/projected/b92a06d2-fab4-462e-ae54-2dd7087ad94a-kube-api-access-nwgjv\") pod \"nova-cell0-conductor-0\" (UID: \"b92a06d2-fab4-462e-ae54-2dd7087ad94a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.457675 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92a06d2-fab4-462e-ae54-2dd7087ad94a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b92a06d2-fab4-462e-ae54-2dd7087ad94a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.457982 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92a06d2-fab4-462e-ae54-2dd7087ad94a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b92a06d2-fab4-462e-ae54-2dd7087ad94a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.474744 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwgjv\" (UniqueName: \"kubernetes.io/projected/b92a06d2-fab4-462e-ae54-2dd7087ad94a-kube-api-access-nwgjv\") pod \"nova-cell0-conductor-0\" (UID: 
\"b92a06d2-fab4-462e-ae54-2dd7087ad94a\") " pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:18 crc kubenswrapper[4811]: I0219 10:29:18.539365 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:19 crc kubenswrapper[4811]: I0219 10:29:19.047622 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 10:29:20 crc kubenswrapper[4811]: I0219 10:29:20.037630 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b92a06d2-fab4-462e-ae54-2dd7087ad94a","Type":"ContainerStarted","Data":"a6ec92b791c80aa42a057d7fb07ee87d0930e341c76d17f7568e13dbc3b97388"} Feb 19 10:29:20 crc kubenswrapper[4811]: I0219 10:29:20.038021 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b92a06d2-fab4-462e-ae54-2dd7087ad94a","Type":"ContainerStarted","Data":"d17e813d518063af88821a12bcc9cf1e8167f2cb9e20c4dde2fe2f195e9e8a89"} Feb 19 10:29:20 crc kubenswrapper[4811]: I0219 10:29:20.038039 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:20 crc kubenswrapper[4811]: I0219 10:29:20.058877 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.058855149 podStartE2EDuration="2.058855149s" podCreationTimestamp="2026-02-19 10:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:29:20.054161179 +0000 UTC m=+1248.580625885" watchObservedRunningTime="2026-02-19 10:29:20.058855149 +0000 UTC m=+1248.585319835" Feb 19 10:29:20 crc kubenswrapper[4811]: I0219 10:29:20.085168 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:29:20 crc kubenswrapper[4811]: I0219 10:29:20.085542 4811 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="proxy-httpd" containerID="cri-o://fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9" gracePeriod=30 Feb 19 10:29:20 crc kubenswrapper[4811]: I0219 10:29:20.085556 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="sg-core" containerID="cri-o://13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95" gracePeriod=30 Feb 19 10:29:20 crc kubenswrapper[4811]: I0219 10:29:20.085596 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="ceilometer-notification-agent" containerID="cri-o://3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d" gracePeriod=30 Feb 19 10:29:20 crc kubenswrapper[4811]: I0219 10:29:20.085678 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="ceilometer-central-agent" containerID="cri-o://ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77" gracePeriod=30 Feb 19 10:29:20 crc kubenswrapper[4811]: I0219 10:29:20.968287 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.067241 4811 generic.go:334] "Generic (PLEG): container finished" podID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerID="fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9" exitCode=0 Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.067280 4811 generic.go:334] "Generic (PLEG): container finished" podID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerID="13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95" exitCode=2 Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.067291 4811 generic.go:334] "Generic (PLEG): container finished" podID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerID="3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d" exitCode=0 Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.067300 4811 generic.go:334] "Generic (PLEG): container finished" podID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerID="ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77" exitCode=0 Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.067350 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e35360f-5379-40e3-91ce-9059ba0dc61c","Type":"ContainerDied","Data":"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9"} Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.067417 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e35360f-5379-40e3-91ce-9059ba0dc61c","Type":"ContainerDied","Data":"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95"} Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.067362 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.067464 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e35360f-5379-40e3-91ce-9059ba0dc61c","Type":"ContainerDied","Data":"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d"} Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.067655 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e35360f-5379-40e3-91ce-9059ba0dc61c","Type":"ContainerDied","Data":"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77"} Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.067668 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e35360f-5379-40e3-91ce-9059ba0dc61c","Type":"ContainerDied","Data":"533fffe7b5dc91c42b7dfab88f615dcfb3504ef1fd9397335d76ec5b6d210fc3"} Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.067479 4811 scope.go:117] "RemoveContainer" containerID="fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.107868 4811 scope.go:117] "RemoveContainer" containerID="13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.112819 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-log-httpd\") pod \"1e35360f-5379-40e3-91ce-9059ba0dc61c\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.112901 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-ceilometer-tls-certs\") pod \"1e35360f-5379-40e3-91ce-9059ba0dc61c\" (UID: 
\"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.112929 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-run-httpd\") pod \"1e35360f-5379-40e3-91ce-9059ba0dc61c\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.112988 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2hg8\" (UniqueName: \"kubernetes.io/projected/1e35360f-5379-40e3-91ce-9059ba0dc61c-kube-api-access-p2hg8\") pod \"1e35360f-5379-40e3-91ce-9059ba0dc61c\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.113022 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-combined-ca-bundle\") pod \"1e35360f-5379-40e3-91ce-9059ba0dc61c\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.113216 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-config-data\") pod \"1e35360f-5379-40e3-91ce-9059ba0dc61c\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.113300 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-scripts\") pod \"1e35360f-5379-40e3-91ce-9059ba0dc61c\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.113333 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-sg-core-conf-yaml\") pod \"1e35360f-5379-40e3-91ce-9059ba0dc61c\" (UID: \"1e35360f-5379-40e3-91ce-9059ba0dc61c\") " Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.115367 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1e35360f-5379-40e3-91ce-9059ba0dc61c" (UID: "1e35360f-5379-40e3-91ce-9059ba0dc61c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.115538 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1e35360f-5379-40e3-91ce-9059ba0dc61c" (UID: "1e35360f-5379-40e3-91ce-9059ba0dc61c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.122709 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e35360f-5379-40e3-91ce-9059ba0dc61c-kube-api-access-p2hg8" (OuterVolumeSpecName: "kube-api-access-p2hg8") pod "1e35360f-5379-40e3-91ce-9059ba0dc61c" (UID: "1e35360f-5379-40e3-91ce-9059ba0dc61c"). InnerVolumeSpecName "kube-api-access-p2hg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.124823 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-scripts" (OuterVolumeSpecName: "scripts") pod "1e35360f-5379-40e3-91ce-9059ba0dc61c" (UID: "1e35360f-5379-40e3-91ce-9059ba0dc61c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.132746 4811 scope.go:117] "RemoveContainer" containerID="3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.157648 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1e35360f-5379-40e3-91ce-9059ba0dc61c" (UID: "1e35360f-5379-40e3-91ce-9059ba0dc61c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.174869 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1e35360f-5379-40e3-91ce-9059ba0dc61c" (UID: "1e35360f-5379-40e3-91ce-9059ba0dc61c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.193965 4811 scope.go:117] "RemoveContainer" containerID="ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.211240 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e35360f-5379-40e3-91ce-9059ba0dc61c" (UID: "1e35360f-5379-40e3-91ce-9059ba0dc61c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.216325 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.216471 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.216554 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e35360f-5379-40e3-91ce-9059ba0dc61c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.216621 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2hg8\" (UniqueName: \"kubernetes.io/projected/1e35360f-5379-40e3-91ce-9059ba0dc61c-kube-api-access-p2hg8\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.216687 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.216756 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.216837 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.220786 4811 
scope.go:117] "RemoveContainer" containerID="fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9" Feb 19 10:29:21 crc kubenswrapper[4811]: E0219 10:29:21.221979 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9\": container with ID starting with fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9 not found: ID does not exist" containerID="fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.222058 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9"} err="failed to get container status \"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9\": rpc error: code = NotFound desc = could not find container \"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9\": container with ID starting with fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9 not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.222087 4811 scope.go:117] "RemoveContainer" containerID="13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95" Feb 19 10:29:21 crc kubenswrapper[4811]: E0219 10:29:21.222537 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95\": container with ID starting with 13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95 not found: ID does not exist" containerID="13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.222614 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95"} err="failed to get container status \"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95\": rpc error: code = NotFound desc = could not find container \"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95\": container with ID starting with 13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95 not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.222652 4811 scope.go:117] "RemoveContainer" containerID="3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d" Feb 19 10:29:21 crc kubenswrapper[4811]: E0219 10:29:21.223030 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d\": container with ID starting with 3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d not found: ID does not exist" containerID="3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.223063 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d"} err="failed to get container status \"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d\": rpc error: code = NotFound desc = could not find container \"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d\": container with ID starting with 3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.223082 4811 scope.go:117] "RemoveContainer" containerID="ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77" Feb 19 10:29:21 crc kubenswrapper[4811]: E0219 10:29:21.223357 4811 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77\": container with ID starting with ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77 not found: ID does not exist" containerID="ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.223394 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77"} err="failed to get container status \"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77\": rpc error: code = NotFound desc = could not find container \"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77\": container with ID starting with ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77 not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.223416 4811 scope.go:117] "RemoveContainer" containerID="fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.223734 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9"} err="failed to get container status \"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9\": rpc error: code = NotFound desc = could not find container \"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9\": container with ID starting with fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9 not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.223766 4811 scope.go:117] "RemoveContainer" containerID="13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.224035 4811 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95"} err="failed to get container status \"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95\": rpc error: code = NotFound desc = could not find container \"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95\": container with ID starting with 13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95 not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.224071 4811 scope.go:117] "RemoveContainer" containerID="3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.224294 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d"} err="failed to get container status \"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d\": rpc error: code = NotFound desc = could not find container \"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d\": container with ID starting with 3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.224323 4811 scope.go:117] "RemoveContainer" containerID="ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.224649 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77"} err="failed to get container status \"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77\": rpc error: code = NotFound desc = could not find container \"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77\": container with ID starting with 
ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77 not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.224676 4811 scope.go:117] "RemoveContainer" containerID="fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.225080 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9"} err="failed to get container status \"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9\": rpc error: code = NotFound desc = could not find container \"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9\": container with ID starting with fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9 not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.225102 4811 scope.go:117] "RemoveContainer" containerID="13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.225355 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95"} err="failed to get container status \"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95\": rpc error: code = NotFound desc = could not find container \"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95\": container with ID starting with 13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95 not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.225384 4811 scope.go:117] "RemoveContainer" containerID="3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.225747 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d"} err="failed to get container status \"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d\": rpc error: code = NotFound desc = could not find container \"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d\": container with ID starting with 3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.225768 4811 scope.go:117] "RemoveContainer" containerID="ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.226080 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77"} err="failed to get container status \"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77\": rpc error: code = NotFound desc = could not find container \"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77\": container with ID starting with ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77 not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.226119 4811 scope.go:117] "RemoveContainer" containerID="fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.226509 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9"} err="failed to get container status \"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9\": rpc error: code = NotFound desc = could not find container \"fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9\": container with ID starting with fe8c77bab3b8e332702c97361f6aeb62005575bd90fbb5ff9ee7ec98038c2ef9 not found: ID does not 
exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.226543 4811 scope.go:117] "RemoveContainer" containerID="13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.226824 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95"} err="failed to get container status \"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95\": rpc error: code = NotFound desc = could not find container \"13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95\": container with ID starting with 13723dbaee57e9b82dd1e2d51fb0be3b2e4292b82db222eff45eb548bca41e95 not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.226856 4811 scope.go:117] "RemoveContainer" containerID="3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.227097 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d"} err="failed to get container status \"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d\": rpc error: code = NotFound desc = could not find container \"3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d\": container with ID starting with 3170af1924d6d7ce2e07904502348a4fb4972889f8209a02614e12890a90b19d not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.227122 4811 scope.go:117] "RemoveContainer" containerID="ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.227358 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77"} err="failed to get container status 
\"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77\": rpc error: code = NotFound desc = could not find container \"ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77\": container with ID starting with ad74386164c171b1c59842b6cbb67f4d15204589ba4731c30b9bbe041a9d4b77 not found: ID does not exist" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.236508 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-config-data" (OuterVolumeSpecName: "config-data") pod "1e35360f-5379-40e3-91ce-9059ba0dc61c" (UID: "1e35360f-5379-40e3-91ce-9059ba0dc61c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.319364 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e35360f-5379-40e3-91ce-9059ba0dc61c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.415814 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.430622 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.444348 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:29:21 crc kubenswrapper[4811]: E0219 10:29:21.445026 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="sg-core" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.445050 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="sg-core" Feb 19 10:29:21 crc kubenswrapper[4811]: E0219 10:29:21.445065 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="ceilometer-notification-agent" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.445073 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="ceilometer-notification-agent" Feb 19 10:29:21 crc kubenswrapper[4811]: E0219 10:29:21.445106 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="ceilometer-central-agent" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.445112 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="ceilometer-central-agent" Feb 19 10:29:21 crc kubenswrapper[4811]: E0219 10:29:21.445122 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="proxy-httpd" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.445128 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="proxy-httpd" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.445324 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="ceilometer-central-agent" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.445345 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="proxy-httpd" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.445358 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="sg-core" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.445369 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" containerName="ceilometer-notification-agent" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.447099 4811 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.449525 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.449549 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.450267 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.478844 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.524230 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.524314 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-scripts\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.524359 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.524388 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-log-httpd\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.524413 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pc9h\" (UniqueName: \"kubernetes.io/projected/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-kube-api-access-2pc9h\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.524794 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-run-httpd\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.524959 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.525045 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-config-data\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.627081 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-run-httpd\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.627810 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.627919 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-config-data\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.627678 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-run-httpd\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.628119 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.628379 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-scripts\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.628513 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.628594 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-log-httpd\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.628644 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pc9h\" (UniqueName: \"kubernetes.io/projected/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-kube-api-access-2pc9h\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.629144 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-log-httpd\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.633667 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.634022 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.634120 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-scripts\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.634993 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.636287 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-config-data\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.648026 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pc9h\" (UniqueName: \"kubernetes.io/projected/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-kube-api-access-2pc9h\") pod \"ceilometer-0\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " pod="openstack/ceilometer-0" Feb 19 10:29:21 crc kubenswrapper[4811]: I0219 10:29:21.764272 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:29:22 crc kubenswrapper[4811]: I0219 10:29:22.215499 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:29:22 crc kubenswrapper[4811]: W0219 10:29:22.222651 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6455a4c1_e3c5_4852_ae89_11be1b0c3b98.slice/crio-29385d7cfe910d905fe23c6da57572488ef955ab5ce7079d5d31240abcca3e24 WatchSource:0}: Error finding container 29385d7cfe910d905fe23c6da57572488ef955ab5ce7079d5d31240abcca3e24: Status 404 returned error can't find the container with id 29385d7cfe910d905fe23c6da57572488ef955ab5ce7079d5d31240abcca3e24 Feb 19 10:29:22 crc kubenswrapper[4811]: I0219 10:29:22.599155 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e35360f-5379-40e3-91ce-9059ba0dc61c" path="/var/lib/kubelet/pods/1e35360f-5379-40e3-91ce-9059ba0dc61c/volumes" Feb 19 10:29:23 crc kubenswrapper[4811]: I0219 10:29:23.103077 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6455a4c1-e3c5-4852-ae89-11be1b0c3b98","Type":"ContainerStarted","Data":"9118c4e25ef4eab041c586a685c361d49ea3a31df89267c418da02f512411d58"} Feb 19 10:29:23 crc kubenswrapper[4811]: I0219 10:29:23.103343 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6455a4c1-e3c5-4852-ae89-11be1b0c3b98","Type":"ContainerStarted","Data":"29385d7cfe910d905fe23c6da57572488ef955ab5ce7079d5d31240abcca3e24"} Feb 19 10:29:24 crc kubenswrapper[4811]: I0219 10:29:24.131711 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6455a4c1-e3c5-4852-ae89-11be1b0c3b98","Type":"ContainerStarted","Data":"12870aa7025ecd628c0dc0ac428535905cb3222e5e60bf333fd73df42be9c674"} Feb 19 10:29:25 crc kubenswrapper[4811]: I0219 10:29:25.149998 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"6455a4c1-e3c5-4852-ae89-11be1b0c3b98","Type":"ContainerStarted","Data":"a2e8e603045076a45f6c6c5b9ae04a9ec7f8a707021c8ed61ed075484a0f6df2"} Feb 19 10:29:25 crc kubenswrapper[4811]: I0219 10:29:25.409737 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-766dfcbd56-lqbrt" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Feb 19 10:29:26 crc kubenswrapper[4811]: I0219 10:29:26.162937 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6455a4c1-e3c5-4852-ae89-11be1b0c3b98","Type":"ContainerStarted","Data":"26907b58e203ded2c3a246f24f10e058a93e8a255492259719dc2ea2ca388546"} Feb 19 10:29:26 crc kubenswrapper[4811]: I0219 10:29:26.163340 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:29:26 crc kubenswrapper[4811]: I0219 10:29:26.189562 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.556056679 podStartE2EDuration="5.189540276s" podCreationTimestamp="2026-02-19 10:29:21 +0000 UTC" firstStartedPulling="2026-02-19 10:29:22.225237888 +0000 UTC m=+1250.751702574" lastFinishedPulling="2026-02-19 10:29:25.858721465 +0000 UTC m=+1254.385186171" observedRunningTime="2026-02-19 10:29:26.18577534 +0000 UTC m=+1254.712240036" watchObservedRunningTime="2026-02-19 10:29:26.189540276 +0000 UTC m=+1254.716004972" Feb 19 10:29:26 crc kubenswrapper[4811]: I0219 10:29:26.788531 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 19 10:29:26 crc kubenswrapper[4811]: I0219 10:29:26.788935 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:29:26 crc kubenswrapper[4811]: I0219 10:29:26.788997 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:29:26 crc kubenswrapper[4811]: I0219 10:29:26.789982 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55d19b42477425a3b99b473164b0990b42ac48a5245631a7eb91475e7635a649"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:29:26 crc kubenswrapper[4811]: I0219 10:29:26.790060 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://55d19b42477425a3b99b473164b0990b42ac48a5245631a7eb91475e7635a649" gracePeriod=600 Feb 19 10:29:27 crc kubenswrapper[4811]: I0219 10:29:27.176206 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="55d19b42477425a3b99b473164b0990b42ac48a5245631a7eb91475e7635a649" exitCode=0 Feb 19 10:29:27 crc kubenswrapper[4811]: I0219 10:29:27.176229 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" 
event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"55d19b42477425a3b99b473164b0990b42ac48a5245631a7eb91475e7635a649"} Feb 19 10:29:27 crc kubenswrapper[4811]: I0219 10:29:27.176304 4811 scope.go:117] "RemoveContainer" containerID="861076d931739c868265cf3b64ebb02f51f96fc8b2fc1a6337148664ae662a28" Feb 19 10:29:28 crc kubenswrapper[4811]: I0219 10:29:28.189926 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"5583b6e6746c91e1b9699411335f3051851aaa5bd7775073594504ef9d9cecbd"} Feb 19 10:29:28 crc kubenswrapper[4811]: I0219 10:29:28.575950 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.094544 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-f4kvz"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.096329 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.101792 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.102885 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.115584 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-f4kvz"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.212426 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-scripts\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.212975 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77kfg\" (UniqueName: \"kubernetes.io/projected/2207a190-4ccd-4560-890a-b9d6c5255f6a-kube-api-access-77kfg\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.213037 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-config-data\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.213167 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.315147 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77kfg\" (UniqueName: \"kubernetes.io/projected/2207a190-4ccd-4560-890a-b9d6c5255f6a-kube-api-access-77kfg\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.315243 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-config-data\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.315399 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.315486 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-scripts\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.329013 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.331987 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-scripts\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.336074 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-config-data\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.342297 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.343706 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.355480 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77kfg\" (UniqueName: \"kubernetes.io/projected/2207a190-4ccd-4560-890a-b9d6c5255f6a-kube-api-access-77kfg\") pod \"nova-cell0-cell-mapping-f4kvz\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.364094 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.364288 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.377513 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.379664 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.386475 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.395031 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.422999 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.423269 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.423339 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tg5x\" (UniqueName: \"kubernetes.io/projected/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-kube-api-access-9tg5x\") pod \"nova-scheduler-0\" (UID: \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.423412 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-config-data\") pod \"nova-scheduler-0\" (UID: \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.533551 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff89198-de83-43ec-bff4-d2dcf9311438-logs\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.533654 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:29 crc 
kubenswrapper[4811]: I0219 10:29:29.533673 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tg5x\" (UniqueName: \"kubernetes.io/projected/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-kube-api-access-9tg5x\") pod \"nova-scheduler-0\" (UID: \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.533717 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-config-data\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.533738 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.533758 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9579\" (UniqueName: \"kubernetes.io/projected/9ff89198-de83-43ec-bff4-d2dcf9311438-kube-api-access-n9579\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.533779 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-config-data\") pod \"nova-scheduler-0\" (UID: \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.544224 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-config-data\") pod \"nova-scheduler-0\" (UID: \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.548430 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.561879 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zm5wz"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.573276 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.636466 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-config-data\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.636550 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.636597 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " 
pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.636631 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9579\" (UniqueName: \"kubernetes.io/projected/9ff89198-de83-43ec-bff4-d2dcf9311438-kube-api-access-n9579\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.636660 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.636706 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25fpm\" (UniqueName: \"kubernetes.io/projected/140c530d-9fec-4354-bada-2733c806bf1b-kube-api-access-25fpm\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.636762 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.636824 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff89198-de83-43ec-bff4-d2dcf9311438-logs\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " 
pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.636924 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.636957 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-config\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.641295 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff89198-de83-43ec-bff4-d2dcf9311438-logs\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.641598 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zm5wz"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.656769 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-config-data\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.657659 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tg5x\" (UniqueName: \"kubernetes.io/projected/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-kube-api-access-9tg5x\") pod \"nova-scheduler-0\" (UID: 
\"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.663261 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9579\" (UniqueName: \"kubernetes.io/projected/9ff89198-de83-43ec-bff4-d2dcf9311438-kube-api-access-n9579\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.663896 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.669464 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.685894 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.719983 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.724831 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.749734 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.749819 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-config\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.749919 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.749950 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.750011 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25fpm\" (UniqueName: 
\"kubernetes.io/projected/140c530d-9fec-4354-bada-2733c806bf1b-kube-api-access-25fpm\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.750057 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.752742 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.753234 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.754051 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-config\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.754103 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-swift-storage-0\") pod 
\"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.754785 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.763742 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.776111 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25fpm\" (UniqueName: \"kubernetes.io/projected/140c530d-9fec-4354-bada-2733c806bf1b-kube-api-access-25fpm\") pod \"dnsmasq-dns-757b4f8459-zm5wz\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") " pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.793717 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.806580 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.849515 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.852916 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.855201 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.862991 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/086be806-4a18-4001-baae-5d3a05b4ed2a-logs\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.863115 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qfcl\" (UniqueName: \"kubernetes.io/projected/086be806-4a18-4001-baae-5d3a05b4ed2a-kube-api-access-2qfcl\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.863142 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.863222 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-config-data\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.875469 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.967078 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.967169 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.967280 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-config-data\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.967497 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/086be806-4a18-4001-baae-5d3a05b4ed2a-logs\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.967554 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qfcl\" (UniqueName: \"kubernetes.io/projected/086be806-4a18-4001-baae-5d3a05b4ed2a-kube-api-access-2qfcl\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.967590 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5qs\" (UniqueName: \"kubernetes.io/projected/5b84f775-8294-4ec6-8c83-1c6c3fc11604-kube-api-access-kt5qs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.967638 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.968857 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/086be806-4a18-4001-baae-5d3a05b4ed2a-logs\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.976026 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.978053 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-config-data\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:29 crc kubenswrapper[4811]: I0219 10:29:29.988729 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qfcl\" (UniqueName: \"kubernetes.io/projected/086be806-4a18-4001-baae-5d3a05b4ed2a-kube-api-access-2qfcl\") pod \"nova-api-0\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " pod="openstack/nova-api-0" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.069815 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kt5qs\" (UniqueName: \"kubernetes.io/projected/5b84f775-8294-4ec6-8c83-1c6c3fc11604-kube-api-access-kt5qs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.069861 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.069878 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.074411 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.076280 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.089853 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5qs\" (UniqueName: 
\"kubernetes.io/projected/5b84f775-8294-4ec6-8c83-1c6c3fc11604-kube-api-access-kt5qs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.130888 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.146634 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-f4kvz"] Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.210086 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.249042 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f4kvz" event={"ID":"2207a190-4ccd-4560-890a-b9d6c5255f6a","Type":"ContainerStarted","Data":"d33bba698ebe49bd1697ed95fc886f90782adc726e68434c16c0ccf19987e640"} Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.389982 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.456375 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2472l"] Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.458239 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.464810 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.465092 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.480053 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2472l"] Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.490099 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.595935 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-config-data\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.596329 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-scripts\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.596366 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 
10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.596499 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87xfg\" (UniqueName: \"kubernetes.io/projected/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-kube-api-access-87xfg\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.675944 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zm5wz"] Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.706258 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-config-data\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.706351 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-scripts\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.706393 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.706502 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87xfg\" (UniqueName: 
\"kubernetes.io/projected/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-kube-api-access-87xfg\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.728962 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.765989 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-scripts\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.775826 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-config-data\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.780373 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87xfg\" (UniqueName: \"kubernetes.io/projected/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-kube-api-access-87xfg\") pod \"nova-cell1-conductor-db-sync-2472l\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:30 crc kubenswrapper[4811]: I0219 10:29:30.897863 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 10:29:31.029206 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:29:31 crc kubenswrapper[4811]: W0219 10:29:31.037733 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod086be806_4a18_4001_baae_5d3a05b4ed2a.slice/crio-a692aa4c53d2a372eb480cde8f1d1502e0ddc354ce5fbc230af2715509ba50ff WatchSource:0}: Error finding container a692aa4c53d2a372eb480cde8f1d1502e0ddc354ce5fbc230af2715509ba50ff: Status 404 returned error can't find the container with id a692aa4c53d2a372eb480cde8f1d1502e0ddc354ce5fbc230af2715509ba50ff Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 10:29:31.047992 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:29:31 crc kubenswrapper[4811]: W0219 10:29:31.058741 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b84f775_8294_4ec6_8c83_1c6c3fc11604.slice/crio-e69f06a9eccd631ba9849a47bd2fd4aac2841f3a97ed5ea557e1df4e0f3eb655 WatchSource:0}: Error finding container e69f06a9eccd631ba9849a47bd2fd4aac2841f3a97ed5ea557e1df4e0f3eb655: Status 404 returned error can't find the container with id e69f06a9eccd631ba9849a47bd2fd4aac2841f3a97ed5ea557e1df4e0f3eb655 Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 10:29:31.265430 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae","Type":"ContainerStarted","Data":"e0d2ba84862faffead8948bf770827876be338fdc045bde9e35338d1cc3f0a87"} Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 10:29:31.277486 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"5b84f775-8294-4ec6-8c83-1c6c3fc11604","Type":"ContainerStarted","Data":"e69f06a9eccd631ba9849a47bd2fd4aac2841f3a97ed5ea557e1df4e0f3eb655"} Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 10:29:31.283107 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"086be806-4a18-4001-baae-5d3a05b4ed2a","Type":"ContainerStarted","Data":"a692aa4c53d2a372eb480cde8f1d1502e0ddc354ce5fbc230af2715509ba50ff"} Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 10:29:31.286143 4811 generic.go:334] "Generic (PLEG): container finished" podID="140c530d-9fec-4354-bada-2733c806bf1b" containerID="9108a83735f14a64e07149e9a7f591657172199cca06768893c6d34b737c5fa4" exitCode=0 Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 10:29:31.286211 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" event={"ID":"140c530d-9fec-4354-bada-2733c806bf1b","Type":"ContainerDied","Data":"9108a83735f14a64e07149e9a7f591657172199cca06768893c6d34b737c5fa4"} Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 10:29:31.286238 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" event={"ID":"140c530d-9fec-4354-bada-2733c806bf1b","Type":"ContainerStarted","Data":"0851ad276b4baa5d8532f207d396a37040371baf97cafb2aa8c5e028977af0a6"} Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 10:29:31.291853 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f4kvz" event={"ID":"2207a190-4ccd-4560-890a-b9d6c5255f6a","Type":"ContainerStarted","Data":"d44b7a74842106a257153acf8d61b6e2d23f7b8f97742bd3dafb8b6adee6af59"} Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 10:29:31.294721 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ff89198-de83-43ec-bff4-d2dcf9311438","Type":"ContainerStarted","Data":"186cf1fe95d61c06d62edbc6626cfebfe61b24a588052c15990c75557f4bfbbb"} Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 
10:29:31.335155 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-f4kvz" podStartSLOduration=2.33513713 podStartE2EDuration="2.33513713s" podCreationTimestamp="2026-02-19 10:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:29:31.327244878 +0000 UTC m=+1259.853709584" watchObservedRunningTime="2026-02-19 10:29:31.33513713 +0000 UTC m=+1259.861601816" Feb 19 10:29:31 crc kubenswrapper[4811]: I0219 10:29:31.490136 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2472l"] Feb 19 10:29:32 crc kubenswrapper[4811]: I0219 10:29:32.309482 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2472l" event={"ID":"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04","Type":"ContainerStarted","Data":"c7a0b025e6f7bcbac0c12794f24bf43326935fcfdae32beceebac78261f1e559"} Feb 19 10:29:32 crc kubenswrapper[4811]: I0219 10:29:32.310152 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2472l" event={"ID":"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04","Type":"ContainerStarted","Data":"f093fd9f1ff140acef93e541a94329822c58af8aa8f110e399b83dd775bb4153"} Feb 19 10:29:32 crc kubenswrapper[4811]: I0219 10:29:32.312885 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" event={"ID":"140c530d-9fec-4354-bada-2733c806bf1b","Type":"ContainerStarted","Data":"c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9"} Feb 19 10:29:32 crc kubenswrapper[4811]: I0219 10:29:32.336798 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-2472l" podStartSLOduration=2.336759236 podStartE2EDuration="2.336759236s" podCreationTimestamp="2026-02-19 10:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:29:32.323275442 +0000 UTC m=+1260.849740128" watchObservedRunningTime="2026-02-19 10:29:32.336759236 +0000 UTC m=+1260.863223932" Feb 19 10:29:32 crc kubenswrapper[4811]: I0219 10:29:32.352096 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" podStartSLOduration=3.352074068 podStartE2EDuration="3.352074068s" podCreationTimestamp="2026-02-19 10:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:29:32.34629559 +0000 UTC m=+1260.872760276" watchObservedRunningTime="2026-02-19 10:29:32.352074068 +0000 UTC m=+1260.878538754" Feb 19 10:29:33 crc kubenswrapper[4811]: I0219 10:29:33.285943 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:33 crc kubenswrapper[4811]: I0219 10:29:33.296894 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:29:33 crc kubenswrapper[4811]: I0219 10:29:33.384205 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:34 crc kubenswrapper[4811]: I0219 10:29:34.420847 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ff89198-de83-43ec-bff4-d2dcf9311438","Type":"ContainerStarted","Data":"02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747"} Feb 19 10:29:34 crc kubenswrapper[4811]: I0219 10:29:34.422598 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae","Type":"ContainerStarted","Data":"27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c"} Feb 19 10:29:34 crc kubenswrapper[4811]: I0219 10:29:34.426948 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5b84f775-8294-4ec6-8c83-1c6c3fc11604","Type":"ContainerStarted","Data":"896bcb8e7d4cc5d48b57d535d5a9d69f875f64ef8ef2192834e7c9d243c1a17e"} Feb 19 10:29:34 crc kubenswrapper[4811]: I0219 10:29:34.427007 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="5b84f775-8294-4ec6-8c83-1c6c3fc11604" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://896bcb8e7d4cc5d48b57d535d5a9d69f875f64ef8ef2192834e7c9d243c1a17e" gracePeriod=30 Feb 19 10:29:34 crc kubenswrapper[4811]: I0219 10:29:34.436482 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"086be806-4a18-4001-baae-5d3a05b4ed2a","Type":"ContainerStarted","Data":"e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2"} Feb 19 10:29:34 crc kubenswrapper[4811]: I0219 10:29:34.469958 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.957162044 podStartE2EDuration="5.469926597s" podCreationTimestamp="2026-02-19 10:29:29 +0000 UTC" firstStartedPulling="2026-02-19 10:29:30.442512838 +0000 UTC m=+1258.968977524" lastFinishedPulling="2026-02-19 10:29:33.955277391 +0000 UTC m=+1262.481742077" observedRunningTime="2026-02-19 10:29:34.446892749 +0000 UTC m=+1262.973357435" watchObservedRunningTime="2026-02-19 10:29:34.469926597 +0000 UTC m=+1262.996391283" Feb 19 10:29:34 crc kubenswrapper[4811]: I0219 10:29:34.484565 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.592883904 podStartE2EDuration="5.484543071s" podCreationTimestamp="2026-02-19 10:29:29 +0000 UTC" firstStartedPulling="2026-02-19 10:29:31.066011575 +0000 UTC m=+1259.592476261" lastFinishedPulling="2026-02-19 10:29:33.957670742 +0000 UTC m=+1262.484135428" observedRunningTime="2026-02-19 10:29:34.478976008 +0000 
UTC m=+1263.005440694" watchObservedRunningTime="2026-02-19 10:29:34.484543071 +0000 UTC m=+1263.011007757" Feb 19 10:29:34 crc kubenswrapper[4811]: I0219 10:29:34.664624 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 10:29:35 crc kubenswrapper[4811]: I0219 10:29:35.211727 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:29:35 crc kubenswrapper[4811]: I0219 10:29:35.409494 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-766dfcbd56-lqbrt" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Feb 19 10:29:35 crc kubenswrapper[4811]: I0219 10:29:35.409845 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:29:35 crc kubenswrapper[4811]: I0219 10:29:35.448139 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"086be806-4a18-4001-baae-5d3a05b4ed2a","Type":"ContainerStarted","Data":"3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5"} Feb 19 10:29:35 crc kubenswrapper[4811]: I0219 10:29:35.451757 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ff89198-de83-43ec-bff4-d2dcf9311438","Type":"ContainerStarted","Data":"158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58"} Feb 19 10:29:35 crc kubenswrapper[4811]: I0219 10:29:35.452227 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9ff89198-de83-43ec-bff4-d2dcf9311438" containerName="nova-metadata-log" containerID="cri-o://02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747" gracePeriod=30 Feb 19 10:29:35 crc 
kubenswrapper[4811]: I0219 10:29:35.452273 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9ff89198-de83-43ec-bff4-d2dcf9311438" containerName="nova-metadata-metadata" containerID="cri-o://158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58" gracePeriod=30 Feb 19 10:29:35 crc kubenswrapper[4811]: I0219 10:29:35.486676 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.570113187 podStartE2EDuration="6.486639699s" podCreationTimestamp="2026-02-19 10:29:29 +0000 UTC" firstStartedPulling="2026-02-19 10:29:31.039550759 +0000 UTC m=+1259.566015445" lastFinishedPulling="2026-02-19 10:29:33.956077271 +0000 UTC m=+1262.482541957" observedRunningTime="2026-02-19 10:29:35.481990161 +0000 UTC m=+1264.008454867" watchObservedRunningTime="2026-02-19 10:29:35.486639699 +0000 UTC m=+1264.013104385" Feb 19 10:29:35 crc kubenswrapper[4811]: I0219 10:29:35.532558 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.085831057 podStartE2EDuration="6.532537192s" podCreationTimestamp="2026-02-19 10:29:29 +0000 UTC" firstStartedPulling="2026-02-19 10:29:30.511302825 +0000 UTC m=+1259.037767511" lastFinishedPulling="2026-02-19 10:29:33.95800896 +0000 UTC m=+1262.484473646" observedRunningTime="2026-02-19 10:29:35.504039224 +0000 UTC m=+1264.030503920" watchObservedRunningTime="2026-02-19 10:29:35.532537192 +0000 UTC m=+1264.059001878" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.402291 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.465944 4811 generic.go:334] "Generic (PLEG): container finished" podID="9ff89198-de83-43ec-bff4-d2dcf9311438" containerID="158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58" exitCode=0 Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.465987 4811 generic.go:334] "Generic (PLEG): container finished" podID="9ff89198-de83-43ec-bff4-d2dcf9311438" containerID="02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747" exitCode=143 Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.466117 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.466175 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ff89198-de83-43ec-bff4-d2dcf9311438","Type":"ContainerDied","Data":"158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58"} Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.466215 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ff89198-de83-43ec-bff4-d2dcf9311438","Type":"ContainerDied","Data":"02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747"} Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.466230 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9ff89198-de83-43ec-bff4-d2dcf9311438","Type":"ContainerDied","Data":"186cf1fe95d61c06d62edbc6626cfebfe61b24a588052c15990c75557f4bfbbb"} Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.466249 4811 scope.go:117] "RemoveContainer" containerID="158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.468873 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9ff89198-de83-43ec-bff4-d2dcf9311438-logs\") pod \"9ff89198-de83-43ec-bff4-d2dcf9311438\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.468955 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-config-data\") pod \"9ff89198-de83-43ec-bff4-d2dcf9311438\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.469006 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-combined-ca-bundle\") pod \"9ff89198-de83-43ec-bff4-d2dcf9311438\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.469105 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9579\" (UniqueName: \"kubernetes.io/projected/9ff89198-de83-43ec-bff4-d2dcf9311438-kube-api-access-n9579\") pod \"9ff89198-de83-43ec-bff4-d2dcf9311438\" (UID: \"9ff89198-de83-43ec-bff4-d2dcf9311438\") " Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.471042 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff89198-de83-43ec-bff4-d2dcf9311438-logs" (OuterVolumeSpecName: "logs") pod "9ff89198-de83-43ec-bff4-d2dcf9311438" (UID: "9ff89198-de83-43ec-bff4-d2dcf9311438"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.477054 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff89198-de83-43ec-bff4-d2dcf9311438-kube-api-access-n9579" (OuterVolumeSpecName: "kube-api-access-n9579") pod "9ff89198-de83-43ec-bff4-d2dcf9311438" (UID: "9ff89198-de83-43ec-bff4-d2dcf9311438"). InnerVolumeSpecName "kube-api-access-n9579". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.517270 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-config-data" (OuterVolumeSpecName: "config-data") pod "9ff89198-de83-43ec-bff4-d2dcf9311438" (UID: "9ff89198-de83-43ec-bff4-d2dcf9311438"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.519674 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ff89198-de83-43ec-bff4-d2dcf9311438" (UID: "9ff89198-de83-43ec-bff4-d2dcf9311438"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.572134 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff89198-de83-43ec-bff4-d2dcf9311438-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.572170 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.572181 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff89198-de83-43ec-bff4-d2dcf9311438-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.572193 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9579\" (UniqueName: \"kubernetes.io/projected/9ff89198-de83-43ec-bff4-d2dcf9311438-kube-api-access-n9579\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.601940 4811 scope.go:117] "RemoveContainer" containerID="02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.624803 4811 scope.go:117] "RemoveContainer" containerID="158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58" Feb 19 10:29:36 crc kubenswrapper[4811]: E0219 10:29:36.625739 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58\": container with ID starting with 158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58 not found: ID does not exist" containerID="158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 
10:29:36.625980 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58"} err="failed to get container status \"158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58\": rpc error: code = NotFound desc = could not find container \"158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58\": container with ID starting with 158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58 not found: ID does not exist" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.626077 4811 scope.go:117] "RemoveContainer" containerID="02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747" Feb 19 10:29:36 crc kubenswrapper[4811]: E0219 10:29:36.626848 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747\": container with ID starting with 02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747 not found: ID does not exist" containerID="02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.626877 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747"} err="failed to get container status \"02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747\": rpc error: code = NotFound desc = could not find container \"02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747\": container with ID starting with 02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747 not found: ID does not exist" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.626894 4811 scope.go:117] "RemoveContainer" containerID="158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58" Feb 19 10:29:36 crc 
kubenswrapper[4811]: I0219 10:29:36.627346 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58"} err="failed to get container status \"158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58\": rpc error: code = NotFound desc = could not find container \"158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58\": container with ID starting with 158a0e08386623604cf376de6b7c9435082e183b861a8af51ed86dd651161d58 not found: ID does not exist" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.627377 4811 scope.go:117] "RemoveContainer" containerID="02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.627642 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747"} err="failed to get container status \"02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747\": rpc error: code = NotFound desc = could not find container \"02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747\": container with ID starting with 02a668dc36856a1242188c45077d7a58a06765b3c43edb394056337a3f0fa747 not found: ID does not exist" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.795616 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.806936 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.823138 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:36 crc kubenswrapper[4811]: E0219 10:29:36.823657 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff89198-de83-43ec-bff4-d2dcf9311438" 
containerName="nova-metadata-metadata" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.823680 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff89198-de83-43ec-bff4-d2dcf9311438" containerName="nova-metadata-metadata" Feb 19 10:29:36 crc kubenswrapper[4811]: E0219 10:29:36.823714 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff89198-de83-43ec-bff4-d2dcf9311438" containerName="nova-metadata-log" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.823724 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff89198-de83-43ec-bff4-d2dcf9311438" containerName="nova-metadata-log" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.823960 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff89198-de83-43ec-bff4-d2dcf9311438" containerName="nova-metadata-log" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.823997 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff89198-de83-43ec-bff4-d2dcf9311438" containerName="nova-metadata-metadata" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.825348 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.828019 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.828378 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.842887 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.982073 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.982176 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-config-data\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.982205 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98456\" (UniqueName: \"kubernetes.io/projected/e3286173-ce4c-4d45-9b0e-ce88b5063d93-kube-api-access-98456\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.982239 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3286173-ce4c-4d45-9b0e-ce88b5063d93-logs\") pod \"nova-metadata-0\" (UID: 
\"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:36 crc kubenswrapper[4811]: I0219 10:29:36.982295 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 10:29:37.083497 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 10:29:37.083625 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 10:29:37.083702 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-config-data\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 10:29:37.083723 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98456\" (UniqueName: \"kubernetes.io/projected/e3286173-ce4c-4d45-9b0e-ce88b5063d93-kube-api-access-98456\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 
10:29:37.083759 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3286173-ce4c-4d45-9b0e-ce88b5063d93-logs\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 10:29:37.085914 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3286173-ce4c-4d45-9b0e-ce88b5063d93-logs\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 10:29:37.088971 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 10:29:37.089280 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-config-data\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 10:29:37.089658 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 10:29:37.101164 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98456\" (UniqueName: \"kubernetes.io/projected/e3286173-ce4c-4d45-9b0e-ce88b5063d93-kube-api-access-98456\") pod \"nova-metadata-0\" (UID: 
\"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 10:29:37.165980 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:29:37 crc kubenswrapper[4811]: I0219 10:29:37.617215 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:38 crc kubenswrapper[4811]: I0219 10:29:38.493388 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3286173-ce4c-4d45-9b0e-ce88b5063d93","Type":"ContainerStarted","Data":"daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44"} Feb 19 10:29:38 crc kubenswrapper[4811]: I0219 10:29:38.493810 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3286173-ce4c-4d45-9b0e-ce88b5063d93","Type":"ContainerStarted","Data":"10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d"} Feb 19 10:29:38 crc kubenswrapper[4811]: I0219 10:29:38.493824 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3286173-ce4c-4d45-9b0e-ce88b5063d93","Type":"ContainerStarted","Data":"e42db80604c71c36052b047252f070a481de0c9e7f146b0b9d29ec163aa40b51"} Feb 19 10:29:38 crc kubenswrapper[4811]: I0219 10:29:38.514822 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.514799953 podStartE2EDuration="2.514799953s" podCreationTimestamp="2026-02-19 10:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:29:38.513254184 +0000 UTC m=+1267.039718880" watchObservedRunningTime="2026-02-19 10:29:38.514799953 +0000 UTC m=+1267.041264639" Feb 19 10:29:38 crc kubenswrapper[4811]: I0219 10:29:38.596978 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9ff89198-de83-43ec-bff4-d2dcf9311438" path="/var/lib/kubelet/pods/9ff89198-de83-43ec-bff4-d2dcf9311438/volumes" Feb 19 10:29:39 crc kubenswrapper[4811]: I0219 10:29:39.510421 4811 generic.go:334] "Generic (PLEG): container finished" podID="2207a190-4ccd-4560-890a-b9d6c5255f6a" containerID="d44b7a74842106a257153acf8d61b6e2d23f7b8f97742bd3dafb8b6adee6af59" exitCode=0 Feb 19 10:29:39 crc kubenswrapper[4811]: I0219 10:29:39.510557 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f4kvz" event={"ID":"2207a190-4ccd-4560-890a-b9d6c5255f6a","Type":"ContainerDied","Data":"d44b7a74842106a257153acf8d61b6e2d23f7b8f97742bd3dafb8b6adee6af59"} Feb 19 10:29:39 crc kubenswrapper[4811]: I0219 10:29:39.683568 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 10:29:39 crc kubenswrapper[4811]: I0219 10:29:39.712326 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 10:29:39 crc kubenswrapper[4811]: I0219 10:29:39.809731 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:29:39 crc kubenswrapper[4811]: I0219 10:29:39.882933 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l669h"] Feb 19 10:29:39 crc kubenswrapper[4811]: I0219 10:29:39.883274 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" podUID="cda0926f-a0e7-488c-a0a4-6404d2720495" containerName="dnsmasq-dns" containerID="cri-o://51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a" gracePeriod=10 Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.131945 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.131996 4811 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.422162 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.528210 4811 generic.go:334] "Generic (PLEG): container finished" podID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerID="73cc025a1567957d2afc71121863816e1f341f3be74bf2a4abb8ce0ce5ccf057" exitCode=137 Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.528304 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-766dfcbd56-lqbrt" event={"ID":"4ab45e9b-6cfc-48a3-b603-f47d47bbd510","Type":"ContainerDied","Data":"73cc025a1567957d2afc71121863816e1f341f3be74bf2a4abb8ce0ce5ccf057"} Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.530128 4811 generic.go:334] "Generic (PLEG): container finished" podID="1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04" containerID="c7a0b025e6f7bcbac0c12794f24bf43326935fcfdae32beceebac78261f1e559" exitCode=0 Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.530189 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2472l" event={"ID":"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04","Type":"ContainerDied","Data":"c7a0b025e6f7bcbac0c12794f24bf43326935fcfdae32beceebac78261f1e559"} Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.542917 4811 generic.go:334] "Generic (PLEG): container finished" podID="cda0926f-a0e7-488c-a0a4-6404d2720495" containerID="51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a" exitCode=0 Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.543729 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" event={"ID":"cda0926f-a0e7-488c-a0a4-6404d2720495","Type":"ContainerDied","Data":"51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a"} Feb 19 10:29:40 crc 
kubenswrapper[4811]: I0219 10:29:40.543781 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" event={"ID":"cda0926f-a0e7-488c-a0a4-6404d2720495","Type":"ContainerDied","Data":"fa9ba624655c71844c854362824d1e85b02df6adf228a26cac2801dc95bb0e00"} Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.543780 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-l669h" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.543802 4811 scope.go:117] "RemoveContainer" containerID="51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.555626 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-swift-storage-0\") pod \"cda0926f-a0e7-488c-a0a4-6404d2720495\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.555769 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-config\") pod \"cda0926f-a0e7-488c-a0a4-6404d2720495\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.555824 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-sb\") pod \"cda0926f-a0e7-488c-a0a4-6404d2720495\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.555900 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-nb\") pod 
\"cda0926f-a0e7-488c-a0a4-6404d2720495\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.556036 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-svc\") pod \"cda0926f-a0e7-488c-a0a4-6404d2720495\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.556090 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj4z5\" (UniqueName: \"kubernetes.io/projected/cda0926f-a0e7-488c-a0a4-6404d2720495-kube-api-access-rj4z5\") pod \"cda0926f-a0e7-488c-a0a4-6404d2720495\" (UID: \"cda0926f-a0e7-488c-a0a4-6404d2720495\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.572377 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda0926f-a0e7-488c-a0a4-6404d2720495-kube-api-access-rj4z5" (OuterVolumeSpecName: "kube-api-access-rj4z5") pod "cda0926f-a0e7-488c-a0a4-6404d2720495" (UID: "cda0926f-a0e7-488c-a0a4-6404d2720495"). InnerVolumeSpecName "kube-api-access-rj4z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.573368 4811 scope.go:117] "RemoveContainer" containerID="cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.618247 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cda0926f-a0e7-488c-a0a4-6404d2720495" (UID: "cda0926f-a0e7-488c-a0a4-6404d2720495"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.660047 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cda0926f-a0e7-488c-a0a4-6404d2720495" (UID: "cda0926f-a0e7-488c-a0a4-6404d2720495"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.662712 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.663247 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj4z5\" (UniqueName: \"kubernetes.io/projected/cda0926f-a0e7-488c-a0a4-6404d2720495-kube-api-access-rj4z5\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.663479 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.665824 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-config" (OuterVolumeSpecName: "config") pod "cda0926f-a0e7-488c-a0a4-6404d2720495" (UID: "cda0926f-a0e7-488c-a0a4-6404d2720495"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.684182 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cda0926f-a0e7-488c-a0a4-6404d2720495" (UID: "cda0926f-a0e7-488c-a0a4-6404d2720495"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.700592 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cda0926f-a0e7-488c-a0a4-6404d2720495" (UID: "cda0926f-a0e7-488c-a0a4-6404d2720495"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.749946 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.773226 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.773260 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.773273 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cda0926f-a0e7-488c-a0a4-6404d2720495-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.784777 4811 scope.go:117] "RemoveContainer" 
containerID="51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a" Feb 19 10:29:40 crc kubenswrapper[4811]: E0219 10:29:40.786875 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a\": container with ID starting with 51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a not found: ID does not exist" containerID="51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.786961 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a"} err="failed to get container status \"51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a\": rpc error: code = NotFound desc = could not find container \"51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a\": container with ID starting with 51223e430eb4ec34dd516f700de9104a9c7f63d43357c8d74cd097a88adfa28a not found: ID does not exist" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.787001 4811 scope.go:117] "RemoveContainer" containerID="cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f" Feb 19 10:29:40 crc kubenswrapper[4811]: E0219 10:29:40.787686 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f\": container with ID starting with cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f not found: ID does not exist" containerID="cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.787731 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f"} err="failed to get container status \"cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f\": rpc error: code = NotFound desc = could not find container \"cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f\": container with ID starting with cf5534ef1905fc7c871595561452bef8c0b08566b95dc9ed9674616deb83c09f not found: ID does not exist" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.787783 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.875114 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-tls-certs\") pod \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.875200 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55wxc\" (UniqueName: \"kubernetes.io/projected/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-kube-api-access-55wxc\") pod \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.875379 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-scripts\") pod \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.875403 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-logs\") pod \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\" 
(UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.875470 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-combined-ca-bundle\") pod \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.875505 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-config-data\") pod \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.875555 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-secret-key\") pod \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\" (UID: \"4ab45e9b-6cfc-48a3-b603-f47d47bbd510\") " Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.881070 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4ab45e9b-6cfc-48a3-b603-f47d47bbd510" (UID: "4ab45e9b-6cfc-48a3-b603-f47d47bbd510"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.881140 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-logs" (OuterVolumeSpecName: "logs") pod "4ab45e9b-6cfc-48a3-b603-f47d47bbd510" (UID: "4ab45e9b-6cfc-48a3-b603-f47d47bbd510"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.887045 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-kube-api-access-55wxc" (OuterVolumeSpecName: "kube-api-access-55wxc") pod "4ab45e9b-6cfc-48a3-b603-f47d47bbd510" (UID: "4ab45e9b-6cfc-48a3-b603-f47d47bbd510"). InnerVolumeSpecName "kube-api-access-55wxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.892579 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l669h"] Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.904743 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-l669h"] Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.921950 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-scripts" (OuterVolumeSpecName: "scripts") pod "4ab45e9b-6cfc-48a3-b603-f47d47bbd510" (UID: "4ab45e9b-6cfc-48a3-b603-f47d47bbd510"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.927195 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-config-data" (OuterVolumeSpecName: "config-data") pod "4ab45e9b-6cfc-48a3-b603-f47d47bbd510" (UID: "4ab45e9b-6cfc-48a3-b603-f47d47bbd510"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.928639 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ab45e9b-6cfc-48a3-b603-f47d47bbd510" (UID: "4ab45e9b-6cfc-48a3-b603-f47d47bbd510"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.953580 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.968281 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "4ab45e9b-6cfc-48a3-b603-f47d47bbd510" (UID: "4ab45e9b-6cfc-48a3-b603-f47d47bbd510"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.978760 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55wxc\" (UniqueName: \"kubernetes.io/projected/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-kube-api-access-55wxc\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.978813 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.978829 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.978843 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.978855 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.978868 4811 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:40 crc kubenswrapper[4811]: I0219 10:29:40.978879 4811 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ab45e9b-6cfc-48a3-b603-f47d47bbd510-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.080903 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-scripts\") pod \"2207a190-4ccd-4560-890a-b9d6c5255f6a\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.080976 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-config-data\") pod \"2207a190-4ccd-4560-890a-b9d6c5255f6a\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.081004 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-combined-ca-bundle\") pod \"2207a190-4ccd-4560-890a-b9d6c5255f6a\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.081219 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77kfg\" (UniqueName: \"kubernetes.io/projected/2207a190-4ccd-4560-890a-b9d6c5255f6a-kube-api-access-77kfg\") pod \"2207a190-4ccd-4560-890a-b9d6c5255f6a\" (UID: \"2207a190-4ccd-4560-890a-b9d6c5255f6a\") " Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.095149 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2207a190-4ccd-4560-890a-b9d6c5255f6a-kube-api-access-77kfg" (OuterVolumeSpecName: "kube-api-access-77kfg") pod "2207a190-4ccd-4560-890a-b9d6c5255f6a" (UID: "2207a190-4ccd-4560-890a-b9d6c5255f6a"). InnerVolumeSpecName "kube-api-access-77kfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.095149 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-scripts" (OuterVolumeSpecName: "scripts") pod "2207a190-4ccd-4560-890a-b9d6c5255f6a" (UID: "2207a190-4ccd-4560-890a-b9d6c5255f6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.114302 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-config-data" (OuterVolumeSpecName: "config-data") pod "2207a190-4ccd-4560-890a-b9d6c5255f6a" (UID: "2207a190-4ccd-4560-890a-b9d6c5255f6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.116339 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2207a190-4ccd-4560-890a-b9d6c5255f6a" (UID: "2207a190-4ccd-4560-890a-b9d6c5255f6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.184031 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.184151 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.184166 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2207a190-4ccd-4560-890a-b9d6c5255f6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.184185 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77kfg\" (UniqueName: \"kubernetes.io/projected/2207a190-4ccd-4560-890a-b9d6c5255f6a-kube-api-access-77kfg\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.213693 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.213744 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.557295 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-f4kvz" event={"ID":"2207a190-4ccd-4560-890a-b9d6c5255f6a","Type":"ContainerDied","Data":"d33bba698ebe49bd1697ed95fc886f90782adc726e68434c16c0ccf19987e640"} Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.557352 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33bba698ebe49bd1697ed95fc886f90782adc726e68434c16c0ccf19987e640" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.557321 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f4kvz" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.560386 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-766dfcbd56-lqbrt" event={"ID":"4ab45e9b-6cfc-48a3-b603-f47d47bbd510","Type":"ContainerDied","Data":"6130ebc3a1b7ccf94b7c9699df9cf15b0e03b26cbeecfcc5bd6d0b4b43ad391a"} Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.560480 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-766dfcbd56-lqbrt" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.560486 4811 scope.go:117] "RemoveContainer" containerID="bfc48b984c2ea255b68016b29a536c04abbfce299b7e71e4dc30275cf1b30bdd" Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.626678 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-766dfcbd56-lqbrt"] Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.633872 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-766dfcbd56-lqbrt"] Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.744108 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.744579 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerName="nova-api-log" containerID="cri-o://e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2" gracePeriod=30 Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.744587 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerName="nova-api-api" containerID="cri-o://3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5" gracePeriod=30 Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.784631 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.794859 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.795107 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e3286173-ce4c-4d45-9b0e-ce88b5063d93" containerName="nova-metadata-log" 
containerID="cri-o://10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d" gracePeriod=30 Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.795737 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e3286173-ce4c-4d45-9b0e-ce88b5063d93" containerName="nova-metadata-metadata" containerID="cri-o://daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44" gracePeriod=30 Feb 19 10:29:41 crc kubenswrapper[4811]: I0219 10:29:41.846088 4811 scope.go:117] "RemoveContainer" containerID="73cc025a1567957d2afc71121863816e1f341f3be74bf2a4abb8ce0ce5ccf057" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.167068 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.167126 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.195755 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.311518 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-scripts\") pod \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.311700 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-combined-ca-bundle\") pod \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.311755 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87xfg\" (UniqueName: \"kubernetes.io/projected/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-kube-api-access-87xfg\") pod \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.311777 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-config-data\") pod \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\" (UID: \"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04\") " Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.321817 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-scripts" (OuterVolumeSpecName: "scripts") pod "1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04" (UID: "1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.327869 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-kube-api-access-87xfg" (OuterVolumeSpecName: "kube-api-access-87xfg") pod "1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04" (UID: "1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04"). InnerVolumeSpecName "kube-api-access-87xfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.352622 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04" (UID: "1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.352697 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-config-data" (OuterVolumeSpecName: "config-data") pod "1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04" (UID: "1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.414191 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.414226 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87xfg\" (UniqueName: \"kubernetes.io/projected/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-kube-api-access-87xfg\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.414256 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.414266 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.414841 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.515351 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-nova-metadata-tls-certs\") pod \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.515575 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-combined-ca-bundle\") pod \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.515650 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98456\" (UniqueName: \"kubernetes.io/projected/e3286173-ce4c-4d45-9b0e-ce88b5063d93-kube-api-access-98456\") pod \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.515920 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3286173-ce4c-4d45-9b0e-ce88b5063d93-logs\") pod \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.516098 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-config-data\") pod \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\" (UID: \"e3286173-ce4c-4d45-9b0e-ce88b5063d93\") " Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.516930 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e3286173-ce4c-4d45-9b0e-ce88b5063d93-logs" (OuterVolumeSpecName: "logs") pod "e3286173-ce4c-4d45-9b0e-ce88b5063d93" (UID: "e3286173-ce4c-4d45-9b0e-ce88b5063d93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.521937 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3286173-ce4c-4d45-9b0e-ce88b5063d93-kube-api-access-98456" (OuterVolumeSpecName: "kube-api-access-98456") pod "e3286173-ce4c-4d45-9b0e-ce88b5063d93" (UID: "e3286173-ce4c-4d45-9b0e-ce88b5063d93"). InnerVolumeSpecName "kube-api-access-98456". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.554092 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3286173-ce4c-4d45-9b0e-ce88b5063d93" (UID: "e3286173-ce4c-4d45-9b0e-ce88b5063d93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.554920 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-config-data" (OuterVolumeSpecName: "config-data") pod "e3286173-ce4c-4d45-9b0e-ce88b5063d93" (UID: "e3286173-ce4c-4d45-9b0e-ce88b5063d93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.581107 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2472l" event={"ID":"1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04","Type":"ContainerDied","Data":"f093fd9f1ff140acef93e541a94329822c58af8aa8f110e399b83dd775bb4153"} Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.581160 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f093fd9f1ff140acef93e541a94329822c58af8aa8f110e399b83dd775bb4153" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.581234 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2472l" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.583926 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e3286173-ce4c-4d45-9b0e-ce88b5063d93" (UID: "e3286173-ce4c-4d45-9b0e-ce88b5063d93"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.591056 4811 generic.go:334] "Generic (PLEG): container finished" podID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerID="e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2" exitCode=143 Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.604317 4811 generic.go:334] "Generic (PLEG): container finished" podID="e3286173-ce4c-4d45-9b0e-ce88b5063d93" containerID="daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44" exitCode=0 Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.604362 4811 generic.go:334] "Generic (PLEG): container finished" podID="e3286173-ce4c-4d45-9b0e-ce88b5063d93" containerID="10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d" exitCode=143 Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.605269 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.607024 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" path="/var/lib/kubelet/pods/4ab45e9b-6cfc-48a3-b603-f47d47bbd510/volumes" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.609467 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda0926f-a0e7-488c-a0a4-6404d2720495" path="/var/lib/kubelet/pods/cda0926f-a0e7-488c-a0a4-6404d2720495/volumes" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.613790 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae" containerName="nova-scheduler-scheduler" containerID="cri-o://27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c" gracePeriod=30 Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.629591 4811 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.629883 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"086be806-4a18-4001-baae-5d3a05b4ed2a","Type":"ContainerDied","Data":"e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2"} Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.629924 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3286173-ce4c-4d45-9b0e-ce88b5063d93","Type":"ContainerDied","Data":"daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44"} Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.629943 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3286173-ce4c-4d45-9b0e-ce88b5063d93","Type":"ContainerDied","Data":"10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d"} Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.629955 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3286173-ce4c-4d45-9b0e-ce88b5063d93","Type":"ContainerDied","Data":"e42db80604c71c36052b047252f070a481de0c9e7f146b0b9d29ec163aa40b51"} Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.629977 4811 scope.go:117] "RemoveContainer" containerID="daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.630102 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98456\" (UniqueName: \"kubernetes.io/projected/e3286173-ce4c-4d45-9b0e-ce88b5063d93-kube-api-access-98456\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.630118 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e3286173-ce4c-4d45-9b0e-ce88b5063d93-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.630128 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.630139 4811 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3286173-ce4c-4d45-9b0e-ce88b5063d93-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.654841 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:29:42 crc kubenswrapper[4811]: E0219 10:29:42.655335 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon-log" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655356 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon-log" Feb 19 10:29:42 crc kubenswrapper[4811]: E0219 10:29:42.655371 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3286173-ce4c-4d45-9b0e-ce88b5063d93" containerName="nova-metadata-log" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655380 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3286173-ce4c-4d45-9b0e-ce88b5063d93" containerName="nova-metadata-log" Feb 19 10:29:42 crc kubenswrapper[4811]: E0219 10:29:42.655394 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda0926f-a0e7-488c-a0a4-6404d2720495" containerName="dnsmasq-dns" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655401 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda0926f-a0e7-488c-a0a4-6404d2720495" containerName="dnsmasq-dns" Feb 19 10:29:42 crc 
kubenswrapper[4811]: E0219 10:29:42.655413 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2207a190-4ccd-4560-890a-b9d6c5255f6a" containerName="nova-manage" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655419 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2207a190-4ccd-4560-890a-b9d6c5255f6a" containerName="nova-manage" Feb 19 10:29:42 crc kubenswrapper[4811]: E0219 10:29:42.655438 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3286173-ce4c-4d45-9b0e-ce88b5063d93" containerName="nova-metadata-metadata" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655458 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3286173-ce4c-4d45-9b0e-ce88b5063d93" containerName="nova-metadata-metadata" Feb 19 10:29:42 crc kubenswrapper[4811]: E0219 10:29:42.655470 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655477 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" Feb 19 10:29:42 crc kubenswrapper[4811]: E0219 10:29:42.655500 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04" containerName="nova-cell1-conductor-db-sync" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655507 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04" containerName="nova-cell1-conductor-db-sync" Feb 19 10:29:42 crc kubenswrapper[4811]: E0219 10:29:42.655519 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655525 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" Feb 19 10:29:42 crc 
kubenswrapper[4811]: E0219 10:29:42.655541 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda0926f-a0e7-488c-a0a4-6404d2720495" containerName="init" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655548 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda0926f-a0e7-488c-a0a4-6404d2720495" containerName="init" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655768 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04" containerName="nova-cell1-conductor-db-sync" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655792 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655801 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda0926f-a0e7-488c-a0a4-6404d2720495" containerName="dnsmasq-dns" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655808 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3286173-ce4c-4d45-9b0e-ce88b5063d93" containerName="nova-metadata-log" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655818 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon-log" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655829 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3286173-ce4c-4d45-9b0e-ce88b5063d93" containerName="nova-metadata-metadata" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.655841 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2207a190-4ccd-4560-890a-b9d6c5255f6a" containerName="nova-manage" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.656521 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.660775 4811 scope.go:117] "RemoveContainer" containerID="10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.668736 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.668895 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.685854 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.725943 4811 scope.go:117] "RemoveContainer" containerID="daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44" Feb 19 10:29:42 crc kubenswrapper[4811]: E0219 10:29:42.727589 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44\": container with ID starting with daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44 not found: ID does not exist" containerID="daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.727637 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44"} err="failed to get container status \"daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44\": rpc error: code = NotFound desc = could not find container \"daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44\": container with ID starting with daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44 not found: ID does not exist" Feb 19 10:29:42 crc 
kubenswrapper[4811]: I0219 10:29:42.727689 4811 scope.go:117] "RemoveContainer" containerID="10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d" Feb 19 10:29:42 crc kubenswrapper[4811]: E0219 10:29:42.731470 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d\": container with ID starting with 10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d not found: ID does not exist" containerID="10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.731549 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d"} err="failed to get container status \"10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d\": rpc error: code = NotFound desc = could not find container \"10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d\": container with ID starting with 10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d not found: ID does not exist" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.731601 4811 scope.go:117] "RemoveContainer" containerID="daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.733466 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44"} err="failed to get container status \"daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44\": rpc error: code = NotFound desc = could not find container \"daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44\": container with ID starting with daf2f850c4c06a9e1eaafafe207ef9ce7c86fe74344ba17061e8c2041865cc44 not found: ID does not exist" Feb 19 
10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.733537 4811 scope.go:117] "RemoveContainer" containerID="10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.735978 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c31d4c-29cf-4dec-8cf1-98f5f1e804a8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"10c31d4c-29cf-4dec-8cf1-98f5f1e804a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.736186 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c31d4c-29cf-4dec-8cf1-98f5f1e804a8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"10c31d4c-29cf-4dec-8cf1-98f5f1e804a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.736416 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wld86\" (UniqueName: \"kubernetes.io/projected/10c31d4c-29cf-4dec-8cf1-98f5f1e804a8-kube-api-access-wld86\") pod \"nova-cell1-conductor-0\" (UID: \"10c31d4c-29cf-4dec-8cf1-98f5f1e804a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.739315 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.743517 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d"} err="failed to get container status \"10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d\": rpc error: code = NotFound desc = could not find container \"10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d\": container 
with ID starting with 10bbf1c6db96a2abfaa29e070738505a45621dc64ff33ffc7191eb28833e888d not found: ID does not exist" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.770009 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.772817 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab45e9b-6cfc-48a3-b603-f47d47bbd510" containerName="horizon" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.774126 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.776137 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.779902 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.781059 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.843146 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c31d4c-29cf-4dec-8cf1-98f5f1e804a8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"10c31d4c-29cf-4dec-8cf1-98f5f1e804a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.843221 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.843270 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-config-data\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.843293 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gg5c\" (UniqueName: \"kubernetes.io/projected/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-kube-api-access-7gg5c\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.843348 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.843377 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wld86\" (UniqueName: \"kubernetes.io/projected/10c31d4c-29cf-4dec-8cf1-98f5f1e804a8-kube-api-access-wld86\") pod \"nova-cell1-conductor-0\" (UID: \"10c31d4c-29cf-4dec-8cf1-98f5f1e804a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.843424 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-logs\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.843493 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/10c31d4c-29cf-4dec-8cf1-98f5f1e804a8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"10c31d4c-29cf-4dec-8cf1-98f5f1e804a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.852308 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c31d4c-29cf-4dec-8cf1-98f5f1e804a8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"10c31d4c-29cf-4dec-8cf1-98f5f1e804a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.852921 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c31d4c-29cf-4dec-8cf1-98f5f1e804a8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"10c31d4c-29cf-4dec-8cf1-98f5f1e804a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.869189 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wld86\" (UniqueName: \"kubernetes.io/projected/10c31d4c-29cf-4dec-8cf1-98f5f1e804a8-kube-api-access-wld86\") pod \"nova-cell1-conductor-0\" (UID: \"10c31d4c-29cf-4dec-8cf1-98f5f1e804a8\") " pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.945497 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.945567 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-config-data\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") 
" pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.946131 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gg5c\" (UniqueName: \"kubernetes.io/projected/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-kube-api-access-7gg5c\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.946181 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.946237 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-logs\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.946742 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-logs\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.949119 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.949627 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-config-data\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.952304 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:42 crc kubenswrapper[4811]: I0219 10:29:42.964273 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gg5c\" (UniqueName: \"kubernetes.io/projected/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-kube-api-access-7gg5c\") pod \"nova-metadata-0\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " pod="openstack/nova-metadata-0" Feb 19 10:29:43 crc kubenswrapper[4811]: I0219 10:29:43.008689 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:43 crc kubenswrapper[4811]: I0219 10:29:43.091479 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:29:43 crc kubenswrapper[4811]: I0219 10:29:43.520307 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:29:43 crc kubenswrapper[4811]: I0219 10:29:43.560649 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 10:29:43 crc kubenswrapper[4811]: W0219 10:29:43.578060 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10c31d4c_29cf_4dec_8cf1_98f5f1e804a8.slice/crio-525c4aca0ebbe18c3a23b4a3b11883785f57a8aa7df5aaeea1372ce1045e5ea5 WatchSource:0}: Error finding container 525c4aca0ebbe18c3a23b4a3b11883785f57a8aa7df5aaeea1372ce1045e5ea5: Status 404 returned error can't find the container with id 525c4aca0ebbe18c3a23b4a3b11883785f57a8aa7df5aaeea1372ce1045e5ea5 Feb 19 10:29:43 crc kubenswrapper[4811]: I0219 10:29:43.625202 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"10c31d4c-29cf-4dec-8cf1-98f5f1e804a8","Type":"ContainerStarted","Data":"525c4aca0ebbe18c3a23b4a3b11883785f57a8aa7df5aaeea1372ce1045e5ea5"} Feb 19 10:29:43 crc kubenswrapper[4811]: I0219 10:29:43.626254 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d","Type":"ContainerStarted","Data":"4ec2080b166da2ccd3ddbeb420930ebd340ac2ebea10ddcc3545479b208a0413"} Feb 19 10:29:44 crc kubenswrapper[4811]: I0219 10:29:44.596371 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3286173-ce4c-4d45-9b0e-ce88b5063d93" path="/var/lib/kubelet/pods/e3286173-ce4c-4d45-9b0e-ce88b5063d93/volumes" Feb 19 10:29:44 crc kubenswrapper[4811]: I0219 10:29:44.642832 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"10c31d4c-29cf-4dec-8cf1-98f5f1e804a8","Type":"ContainerStarted","Data":"0e0a37d1f79223a85e7fcdf3fd7ab2466a88506c505cb959f1ec917592a2743b"} Feb 19 10:29:44 crc kubenswrapper[4811]: I0219 10:29:44.642910 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:44 crc kubenswrapper[4811]: I0219 10:29:44.645569 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d","Type":"ContainerStarted","Data":"b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e"} Feb 19 10:29:44 crc kubenswrapper[4811]: I0219 10:29:44.645654 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d","Type":"ContainerStarted","Data":"dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0"} Feb 19 10:29:44 crc kubenswrapper[4811]: I0219 10:29:44.667543 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.667516603 podStartE2EDuration="2.667516603s" podCreationTimestamp="2026-02-19 10:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:29:44.661082818 +0000 UTC m=+1273.187547514" watchObservedRunningTime="2026-02-19 10:29:44.667516603 +0000 UTC m=+1273.193981289" Feb 19 10:29:44 crc kubenswrapper[4811]: E0219 10:29:44.668518 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:29:44 crc kubenswrapper[4811]: E0219 10:29:44.669561 4811 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:29:44 crc kubenswrapper[4811]: E0219 10:29:44.681617 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 10:29:44 crc kubenswrapper[4811]: E0219 10:29:44.681872 4811 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae" containerName="nova-scheduler-scheduler" Feb 19 10:29:44 crc kubenswrapper[4811]: I0219 10:29:44.700601 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.700578327 podStartE2EDuration="2.700578327s" podCreationTimestamp="2026-02-19 10:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:29:44.69011847 +0000 UTC m=+1273.216583186" watchObservedRunningTime="2026-02-19 10:29:44.700578327 +0000 UTC m=+1273.227043013" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.429761 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.532670 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-config-data\") pod \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\" (UID: \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.533014 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-combined-ca-bundle\") pod \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\" (UID: \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.533226 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tg5x\" (UniqueName: \"kubernetes.io/projected/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-kube-api-access-9tg5x\") pod \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\" (UID: \"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae\") " Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.541330 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-kube-api-access-9tg5x" (OuterVolumeSpecName: "kube-api-access-9tg5x") pod "1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae" (UID: "1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae"). InnerVolumeSpecName "kube-api-access-9tg5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.566909 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-config-data" (OuterVolumeSpecName: "config-data") pod "1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae" (UID: "1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.569064 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae" (UID: "1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.638129 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.638476 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tg5x\" (UniqueName: \"kubernetes.io/projected/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-kube-api-access-9tg5x\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.638489 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.668478 4811 generic.go:334] "Generic (PLEG): container finished" podID="1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae" containerID="27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c" exitCode=0 Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.668530 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae","Type":"ContainerDied","Data":"27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c"} Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.668584 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.668598 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae","Type":"ContainerDied","Data":"e0d2ba84862faffead8948bf770827876be338fdc045bde9e35338d1cc3f0a87"} Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.668621 4811 scope.go:117] "RemoveContainer" containerID="27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.693656 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.697981 4811 scope.go:117] "RemoveContainer" containerID="27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c" Feb 19 10:29:46 crc kubenswrapper[4811]: E0219 10:29:46.698564 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c\": container with ID starting with 27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c not found: ID does not exist" containerID="27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.698597 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c"} err="failed to get container status \"27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c\": rpc error: code = NotFound desc = could not find container \"27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c\": container with ID starting with 27f1dcc0efe3cae42b79e032a8875623167e03a3fd7b27bb00b634a8112e5f0c not found: ID does not exist" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.702868 4811 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.711918 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:29:46 crc kubenswrapper[4811]: E0219 10:29:46.712313 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae" containerName="nova-scheduler-scheduler" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.712331 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae" containerName="nova-scheduler-scheduler" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.712571 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae" containerName="nova-scheduler-scheduler" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.713240 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.725270 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.725289 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.842789 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rq9\" (UniqueName: \"kubernetes.io/projected/79d85ecb-0aba-485f-a679-452a09aed5d4-kube-api-access-88rq9\") pod \"nova-scheduler-0\" (UID: \"79d85ecb-0aba-485f-a679-452a09aed5d4\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.843259 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-config-data\") pod \"nova-scheduler-0\" (UID: \"79d85ecb-0aba-485f-a679-452a09aed5d4\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.843501 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79d85ecb-0aba-485f-a679-452a09aed5d4\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.945760 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79d85ecb-0aba-485f-a679-452a09aed5d4\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.946097 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rq9\" (UniqueName: \"kubernetes.io/projected/79d85ecb-0aba-485f-a679-452a09aed5d4-kube-api-access-88rq9\") pod \"nova-scheduler-0\" (UID: \"79d85ecb-0aba-485f-a679-452a09aed5d4\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.946225 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-config-data\") pod \"nova-scheduler-0\" (UID: \"79d85ecb-0aba-485f-a679-452a09aed5d4\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.950728 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"79d85ecb-0aba-485f-a679-452a09aed5d4\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.952328 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-config-data\") pod \"nova-scheduler-0\" (UID: \"79d85ecb-0aba-485f-a679-452a09aed5d4\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:46 crc kubenswrapper[4811]: I0219 10:29:46.962237 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rq9\" (UniqueName: \"kubernetes.io/projected/79d85ecb-0aba-485f-a679-452a09aed5d4-kube-api-access-88rq9\") pod \"nova-scheduler-0\" (UID: \"79d85ecb-0aba-485f-a679-452a09aed5d4\") " pod="openstack/nova-scheduler-0" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.028575 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:29:47 crc kubenswrapper[4811]: W0219 10:29:47.553310 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79d85ecb_0aba_485f_a679_452a09aed5d4.slice/crio-ad72480d8eae9ae29343e76d69705421daf195630403f7001e2922c310074732 WatchSource:0}: Error finding container ad72480d8eae9ae29343e76d69705421daf195630403f7001e2922c310074732: Status 404 returned error can't find the container with id ad72480d8eae9ae29343e76d69705421daf195630403f7001e2922c310074732 Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.557356 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.641415 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.687989 4811 generic.go:334] "Generic (PLEG): container finished" podID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerID="3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5" exitCode=0 Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.688087 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"086be806-4a18-4001-baae-5d3a05b4ed2a","Type":"ContainerDied","Data":"3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5"} Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.688129 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"086be806-4a18-4001-baae-5d3a05b4ed2a","Type":"ContainerDied","Data":"a692aa4c53d2a372eb480cde8f1d1502e0ddc354ce5fbc230af2715509ba50ff"} Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.688159 4811 scope.go:117] "RemoveContainer" containerID="3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.688339 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.696390 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79d85ecb-0aba-485f-a679-452a09aed5d4","Type":"ContainerStarted","Data":"ad72480d8eae9ae29343e76d69705421daf195630403f7001e2922c310074732"} Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.728283 4811 scope.go:117] "RemoveContainer" containerID="e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.749097 4811 scope.go:117] "RemoveContainer" containerID="3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5" Feb 19 10:29:47 crc kubenswrapper[4811]: E0219 10:29:47.750143 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5\": container with ID starting with 3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5 not found: ID does not exist" containerID="3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.750182 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5"} err="failed to get container status \"3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5\": rpc error: code = NotFound desc = could not find container \"3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5\": container with ID starting with 3d3eee6647f3c59badf0c31e12e1e4f3738f9de77de8af2c3f12a896ffbd57d5 not found: ID does not exist" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.750210 4811 scope.go:117] "RemoveContainer" containerID="e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2" Feb 19 10:29:47 crc kubenswrapper[4811]: E0219 
10:29:47.750689 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2\": container with ID starting with e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2 not found: ID does not exist" containerID="e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.750735 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2"} err="failed to get container status \"e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2\": rpc error: code = NotFound desc = could not find container \"e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2\": container with ID starting with e92834946d2d7421ec0755e6ed699a87e22dc84ea665a71967ebd27b38ede6c2 not found: ID does not exist" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.781544 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/086be806-4a18-4001-baae-5d3a05b4ed2a-logs\") pod \"086be806-4a18-4001-baae-5d3a05b4ed2a\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.781811 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qfcl\" (UniqueName: \"kubernetes.io/projected/086be806-4a18-4001-baae-5d3a05b4ed2a-kube-api-access-2qfcl\") pod \"086be806-4a18-4001-baae-5d3a05b4ed2a\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.781881 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-config-data\") pod 
\"086be806-4a18-4001-baae-5d3a05b4ed2a\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.782107 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-combined-ca-bundle\") pod \"086be806-4a18-4001-baae-5d3a05b4ed2a\" (UID: \"086be806-4a18-4001-baae-5d3a05b4ed2a\") " Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.782760 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/086be806-4a18-4001-baae-5d3a05b4ed2a-logs" (OuterVolumeSpecName: "logs") pod "086be806-4a18-4001-baae-5d3a05b4ed2a" (UID: "086be806-4a18-4001-baae-5d3a05b4ed2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.788730 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086be806-4a18-4001-baae-5d3a05b4ed2a-kube-api-access-2qfcl" (OuterVolumeSpecName: "kube-api-access-2qfcl") pod "086be806-4a18-4001-baae-5d3a05b4ed2a" (UID: "086be806-4a18-4001-baae-5d3a05b4ed2a"). InnerVolumeSpecName "kube-api-access-2qfcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.814147 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-config-data" (OuterVolumeSpecName: "config-data") pod "086be806-4a18-4001-baae-5d3a05b4ed2a" (UID: "086be806-4a18-4001-baae-5d3a05b4ed2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.825657 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "086be806-4a18-4001-baae-5d3a05b4ed2a" (UID: "086be806-4a18-4001-baae-5d3a05b4ed2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.884757 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qfcl\" (UniqueName: \"kubernetes.io/projected/086be806-4a18-4001-baae-5d3a05b4ed2a-kube-api-access-2qfcl\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.884796 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.884805 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/086be806-4a18-4001-baae-5d3a05b4ed2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:47 crc kubenswrapper[4811]: I0219 10:29:47.884814 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/086be806-4a18-4001-baae-5d3a05b4ed2a-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.043592 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.091685 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.091771 4811 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.136184 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.146573 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.165145 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 10:29:48 crc kubenswrapper[4811]: E0219 10:29:48.166476 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerName="nova-api-log" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.166507 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerName="nova-api-log" Feb 19 10:29:48 crc kubenswrapper[4811]: E0219 10:29:48.166558 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerName="nova-api-api" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.166568 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerName="nova-api-api" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.167063 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerName="nova-api-api" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.167121 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="086be806-4a18-4001-baae-5d3a05b4ed2a" containerName="nova-api-log" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.177191 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.181358 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.253589 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.314994 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-config-data\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.315050 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.315068 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50acbe06-313b-43f2-a4ee-97de3f8778de-logs\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.315100 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjkfp\" (UniqueName: \"kubernetes.io/projected/50acbe06-313b-43f2-a4ee-97de3f8778de-kube-api-access-sjkfp\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.416803 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-config-data\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.416857 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.416874 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50acbe06-313b-43f2-a4ee-97de3f8778de-logs\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.416907 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjkfp\" (UniqueName: \"kubernetes.io/projected/50acbe06-313b-43f2-a4ee-97de3f8778de-kube-api-access-sjkfp\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.417783 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50acbe06-313b-43f2-a4ee-97de3f8778de-logs\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.425747 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.434401 
4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-config-data\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.434600 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjkfp\" (UniqueName: \"kubernetes.io/projected/50acbe06-313b-43f2-a4ee-97de3f8778de-kube-api-access-sjkfp\") pod \"nova-api-0\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.522118 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.596949 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086be806-4a18-4001-baae-5d3a05b4ed2a" path="/var/lib/kubelet/pods/086be806-4a18-4001-baae-5d3a05b4ed2a/volumes" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.598463 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae" path="/var/lib/kubelet/pods/1c74ef03-5b63-47a6-b9a8-3f1787d9d7ae/volumes" Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.718402 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79d85ecb-0aba-485f-a679-452a09aed5d4","Type":"ContainerStarted","Data":"2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621"} Feb 19 10:29:48 crc kubenswrapper[4811]: I0219 10:29:48.743843 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7438240609999998 podStartE2EDuration="2.743824061s" podCreationTimestamp="2026-02-19 10:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-19 10:29:48.737324685 +0000 UTC m=+1277.263789391" watchObservedRunningTime="2026-02-19 10:29:48.743824061 +0000 UTC m=+1277.270288747" Feb 19 10:29:49 crc kubenswrapper[4811]: I0219 10:29:49.040015 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:29:49 crc kubenswrapper[4811]: W0219 10:29:49.053572 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50acbe06_313b_43f2_a4ee_97de3f8778de.slice/crio-d763bd6d738c976653e675d1a85c454914820a1810f6a334aaf0450cc2740c45 WatchSource:0}: Error finding container d763bd6d738c976653e675d1a85c454914820a1810f6a334aaf0450cc2740c45: Status 404 returned error can't find the container with id d763bd6d738c976653e675d1a85c454914820a1810f6a334aaf0450cc2740c45 Feb 19 10:29:49 crc kubenswrapper[4811]: I0219 10:29:49.731637 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50acbe06-313b-43f2-a4ee-97de3f8778de","Type":"ContainerStarted","Data":"71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45"} Feb 19 10:29:49 crc kubenswrapper[4811]: I0219 10:29:49.731957 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50acbe06-313b-43f2-a4ee-97de3f8778de","Type":"ContainerStarted","Data":"07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04"} Feb 19 10:29:49 crc kubenswrapper[4811]: I0219 10:29:49.731973 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50acbe06-313b-43f2-a4ee-97de3f8778de","Type":"ContainerStarted","Data":"d763bd6d738c976653e675d1a85c454914820a1810f6a334aaf0450cc2740c45"} Feb 19 10:29:49 crc kubenswrapper[4811]: I0219 10:29:49.758532 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.758511752 podStartE2EDuration="1.758511752s" 
podCreationTimestamp="2026-02-19 10:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:29:49.749268356 +0000 UTC m=+1278.275733042" watchObservedRunningTime="2026-02-19 10:29:49.758511752 +0000 UTC m=+1278.284976438" Feb 19 10:29:51 crc kubenswrapper[4811]: I0219 10:29:51.775885 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 10:29:52 crc kubenswrapper[4811]: I0219 10:29:52.030150 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 10:29:53 crc kubenswrapper[4811]: I0219 10:29:53.092693 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:29:53 crc kubenswrapper[4811]: I0219 10:29:53.093173 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:29:54 crc kubenswrapper[4811]: I0219 10:29:54.106686 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:29:54 crc kubenswrapper[4811]: I0219 10:29:54.106717 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:29:57 crc kubenswrapper[4811]: I0219 10:29:57.029790 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 10:29:57 crc kubenswrapper[4811]: I0219 10:29:57.069396 4811 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 10:29:57 crc kubenswrapper[4811]: I0219 10:29:57.837541 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 10:29:58 crc kubenswrapper[4811]: I0219 10:29:58.522664 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:29:58 crc kubenswrapper[4811]: I0219 10:29:58.523133 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:29:59 crc kubenswrapper[4811]: I0219 10:29:59.604872 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:29:59 crc kubenswrapper[4811]: I0219 10:29:59.605179 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.169525 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb"] Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.171666 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.174778 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.175157 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.183853 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb"] Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.268368 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61a69487-4b64-4509-85a9-0600a5b4e958-secret-volume\") pod \"collect-profiles-29524950-5mlkb\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.268549 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcmbn\" (UniqueName: \"kubernetes.io/projected/61a69487-4b64-4509-85a9-0600a5b4e958-kube-api-access-qcmbn\") pod \"collect-profiles-29524950-5mlkb\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.268727 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61a69487-4b64-4509-85a9-0600a5b4e958-config-volume\") pod \"collect-profiles-29524950-5mlkb\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.371021 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcmbn\" (UniqueName: \"kubernetes.io/projected/61a69487-4b64-4509-85a9-0600a5b4e958-kube-api-access-qcmbn\") pod \"collect-profiles-29524950-5mlkb\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.371213 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61a69487-4b64-4509-85a9-0600a5b4e958-config-volume\") pod \"collect-profiles-29524950-5mlkb\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.371304 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61a69487-4b64-4509-85a9-0600a5b4e958-secret-volume\") pod \"collect-profiles-29524950-5mlkb\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.384278 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61a69487-4b64-4509-85a9-0600a5b4e958-secret-volume\") pod \"collect-profiles-29524950-5mlkb\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.385665 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/61a69487-4b64-4509-85a9-0600a5b4e958-config-volume\") pod \"collect-profiles-29524950-5mlkb\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.388334 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcmbn\" (UniqueName: \"kubernetes.io/projected/61a69487-4b64-4509-85a9-0600a5b4e958-kube-api-access-qcmbn\") pod \"collect-profiles-29524950-5mlkb\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:00 crc kubenswrapper[4811]: I0219 10:30:00.490895 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:01 crc kubenswrapper[4811]: I0219 10:30:01.004233 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb"] Feb 19 10:30:01 crc kubenswrapper[4811]: W0219 10:30:01.011240 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61a69487_4b64_4509_85a9_0600a5b4e958.slice/crio-a7f22f2b357954b5539e2690cdbb8595a6a779584de51f9b2b8d72889080f1c8 WatchSource:0}: Error finding container a7f22f2b357954b5539e2690cdbb8595a6a779584de51f9b2b8d72889080f1c8: Status 404 returned error can't find the container with id a7f22f2b357954b5539e2690cdbb8595a6a779584de51f9b2b8d72889080f1c8 Feb 19 10:30:01 crc kubenswrapper[4811]: I0219 10:30:01.850026 4811 generic.go:334] "Generic (PLEG): container finished" podID="61a69487-4b64-4509-85a9-0600a5b4e958" containerID="22ead511b2c643cb1f40b13dea790ef572e7e2eedf491d311e21a0b5d81706ec" exitCode=0 Feb 19 10:30:01 crc kubenswrapper[4811]: I0219 10:30:01.850088 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" event={"ID":"61a69487-4b64-4509-85a9-0600a5b4e958","Type":"ContainerDied","Data":"22ead511b2c643cb1f40b13dea790ef572e7e2eedf491d311e21a0b5d81706ec"} Feb 19 10:30:01 crc kubenswrapper[4811]: I0219 10:30:01.850117 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" event={"ID":"61a69487-4b64-4509-85a9-0600a5b4e958","Type":"ContainerStarted","Data":"a7f22f2b357954b5539e2690cdbb8595a6a779584de51f9b2b8d72889080f1c8"} Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.099558 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.102132 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.111703 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.246773 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.336195 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcmbn\" (UniqueName: \"kubernetes.io/projected/61a69487-4b64-4509-85a9-0600a5b4e958-kube-api-access-qcmbn\") pod \"61a69487-4b64-4509-85a9-0600a5b4e958\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.336249 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61a69487-4b64-4509-85a9-0600a5b4e958-secret-volume\") pod \"61a69487-4b64-4509-85a9-0600a5b4e958\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.336280 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61a69487-4b64-4509-85a9-0600a5b4e958-config-volume\") pod \"61a69487-4b64-4509-85a9-0600a5b4e958\" (UID: \"61a69487-4b64-4509-85a9-0600a5b4e958\") " Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.337360 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61a69487-4b64-4509-85a9-0600a5b4e958-config-volume" (OuterVolumeSpecName: "config-volume") pod "61a69487-4b64-4509-85a9-0600a5b4e958" (UID: "61a69487-4b64-4509-85a9-0600a5b4e958"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.344756 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61a69487-4b64-4509-85a9-0600a5b4e958-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "61a69487-4b64-4509-85a9-0600a5b4e958" (UID: "61a69487-4b64-4509-85a9-0600a5b4e958"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.345038 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a69487-4b64-4509-85a9-0600a5b4e958-kube-api-access-qcmbn" (OuterVolumeSpecName: "kube-api-access-qcmbn") pod "61a69487-4b64-4509-85a9-0600a5b4e958" (UID: "61a69487-4b64-4509-85a9-0600a5b4e958"). InnerVolumeSpecName "kube-api-access-qcmbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.440299 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcmbn\" (UniqueName: \"kubernetes.io/projected/61a69487-4b64-4509-85a9-0600a5b4e958-kube-api-access-qcmbn\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.440355 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61a69487-4b64-4509-85a9-0600a5b4e958-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.440371 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61a69487-4b64-4509-85a9-0600a5b4e958-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.872529 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.872586 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb" event={"ID":"61a69487-4b64-4509-85a9-0600a5b4e958","Type":"ContainerDied","Data":"a7f22f2b357954b5539e2690cdbb8595a6a779584de51f9b2b8d72889080f1c8"} Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.872620 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7f22f2b357954b5539e2690cdbb8595a6a779584de51f9b2b8d72889080f1c8" Feb 19 10:30:03 crc kubenswrapper[4811]: I0219 10:30:03.879325 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 10:30:04 crc kubenswrapper[4811]: I0219 10:30:04.884940 4811 generic.go:334] "Generic (PLEG): container finished" podID="5b84f775-8294-4ec6-8c83-1c6c3fc11604" containerID="896bcb8e7d4cc5d48b57d535d5a9d69f875f64ef8ef2192834e7c9d243c1a17e" exitCode=137 Feb 19 10:30:04 crc kubenswrapper[4811]: I0219 10:30:04.885109 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5b84f775-8294-4ec6-8c83-1c6c3fc11604","Type":"ContainerDied","Data":"896bcb8e7d4cc5d48b57d535d5a9d69f875f64ef8ef2192834e7c9d243c1a17e"} Feb 19 10:30:04 crc kubenswrapper[4811]: I0219 10:30:04.885570 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5b84f775-8294-4ec6-8c83-1c6c3fc11604","Type":"ContainerDied","Data":"e69f06a9eccd631ba9849a47bd2fd4aac2841f3a97ed5ea557e1df4e0f3eb655"} Feb 19 10:30:04 crc kubenswrapper[4811]: I0219 10:30:04.885592 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e69f06a9eccd631ba9849a47bd2fd4aac2841f3a97ed5ea557e1df4e0f3eb655" Feb 19 10:30:04 crc kubenswrapper[4811]: I0219 10:30:04.912523 4811 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:04 crc kubenswrapper[4811]: I0219 10:30:04.972099 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5qs\" (UniqueName: \"kubernetes.io/projected/5b84f775-8294-4ec6-8c83-1c6c3fc11604-kube-api-access-kt5qs\") pod \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " Feb 19 10:30:04 crc kubenswrapper[4811]: I0219 10:30:04.972341 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-combined-ca-bundle\") pod \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " Feb 19 10:30:04 crc kubenswrapper[4811]: I0219 10:30:04.972380 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-config-data\") pod \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\" (UID: \"5b84f775-8294-4ec6-8c83-1c6c3fc11604\") " Feb 19 10:30:04 crc kubenswrapper[4811]: I0219 10:30:04.982746 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b84f775-8294-4ec6-8c83-1c6c3fc11604-kube-api-access-kt5qs" (OuterVolumeSpecName: "kube-api-access-kt5qs") pod "5b84f775-8294-4ec6-8c83-1c6c3fc11604" (UID: "5b84f775-8294-4ec6-8c83-1c6c3fc11604"). InnerVolumeSpecName "kube-api-access-kt5qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.004072 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-config-data" (OuterVolumeSpecName: "config-data") pod "5b84f775-8294-4ec6-8c83-1c6c3fc11604" (UID: "5b84f775-8294-4ec6-8c83-1c6c3fc11604"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.006711 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b84f775-8294-4ec6-8c83-1c6c3fc11604" (UID: "5b84f775-8294-4ec6-8c83-1c6c3fc11604"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.076651 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.076693 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b84f775-8294-4ec6-8c83-1c6c3fc11604-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.076706 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt5qs\" (UniqueName: \"kubernetes.io/projected/5b84f775-8294-4ec6-8c83-1c6c3fc11604-kube-api-access-kt5qs\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.894714 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.932967 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.955751 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.967015 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:30:05 crc kubenswrapper[4811]: E0219 10:30:05.967507 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b84f775-8294-4ec6-8c83-1c6c3fc11604" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.967544 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b84f775-8294-4ec6-8c83-1c6c3fc11604" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 10:30:05 crc kubenswrapper[4811]: E0219 10:30:05.967574 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a69487-4b64-4509-85a9-0600a5b4e958" containerName="collect-profiles" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.967582 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a69487-4b64-4509-85a9-0600a5b4e958" containerName="collect-profiles" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.967797 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a69487-4b64-4509-85a9-0600a5b4e958" containerName="collect-profiles" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.967816 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b84f775-8294-4ec6-8c83-1c6c3fc11604" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.968692 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.978234 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.978627 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 10:30:05 crc kubenswrapper[4811]: I0219 10:30:05.979205 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.002952 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.099000 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.099471 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.099886 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: 
I0219 10:30:06.100067 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.100187 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt8bt\" (UniqueName: \"kubernetes.io/projected/66690abb-6530-48ba-ab47-36369217c473-kube-api-access-pt8bt\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.202897 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt8bt\" (UniqueName: \"kubernetes.io/projected/66690abb-6530-48ba-ab47-36369217c473-kube-api-access-pt8bt\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.202973 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.203145 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc 
kubenswrapper[4811]: I0219 10:30:06.203205 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.203266 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.208209 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.209292 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.209636 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.217196 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/66690abb-6530-48ba-ab47-36369217c473-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.222290 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt8bt\" (UniqueName: \"kubernetes.io/projected/66690abb-6530-48ba-ab47-36369217c473-kube-api-access-pt8bt\") pod \"nova-cell1-novncproxy-0\" (UID: \"66690abb-6530-48ba-ab47-36369217c473\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.288412 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.596018 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b84f775-8294-4ec6-8c83-1c6c3fc11604" path="/var/lib/kubelet/pods/5b84f775-8294-4ec6-8c83-1c6c3fc11604/volumes" Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.769629 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 10:30:06 crc kubenswrapper[4811]: W0219 10:30:06.777300 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66690abb_6530_48ba_ab47_36369217c473.slice/crio-0d5f65683b5604954021d7bd45265633d010190cc63b23483b07216bd5bfc1c0 WatchSource:0}: Error finding container 0d5f65683b5604954021d7bd45265633d010190cc63b23483b07216bd5bfc1c0: Status 404 returned error can't find the container with id 0d5f65683b5604954021d7bd45265633d010190cc63b23483b07216bd5bfc1c0 Feb 19 10:30:06 crc kubenswrapper[4811]: I0219 10:30:06.909073 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"66690abb-6530-48ba-ab47-36369217c473","Type":"ContainerStarted","Data":"0d5f65683b5604954021d7bd45265633d010190cc63b23483b07216bd5bfc1c0"} Feb 19 10:30:07 crc kubenswrapper[4811]: I0219 10:30:07.923365 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"66690abb-6530-48ba-ab47-36369217c473","Type":"ContainerStarted","Data":"804cb9ad1ee54d177005cac561c2d3d7a61c4abc569fdf069896f30bb5fcfc48"} Feb 19 10:30:07 crc kubenswrapper[4811]: I0219 10:30:07.945863 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.945817464 podStartE2EDuration="2.945817464s" podCreationTimestamp="2026-02-19 10:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:30:07.942269253 +0000 UTC m=+1296.468733969" watchObservedRunningTime="2026-02-19 10:30:07.945817464 +0000 UTC m=+1296.472282160" Feb 19 10:30:08 crc kubenswrapper[4811]: I0219 10:30:08.527788 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:30:08 crc kubenswrapper[4811]: I0219 10:30:08.528826 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:30:08 crc kubenswrapper[4811]: I0219 10:30:08.531810 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:30:08 crc kubenswrapper[4811]: I0219 10:30:08.534586 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 10:30:08 crc kubenswrapper[4811]: I0219 10:30:08.935376 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:30:08 crc kubenswrapper[4811]: I0219 10:30:08.937788 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.094843 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rfmsw"] Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.099345 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.108472 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rfmsw"] Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.178118 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.178259 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.178716 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.178826 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-config\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.178902 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.178972 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjzb\" (UniqueName: \"kubernetes.io/projected/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-kube-api-access-hqjzb\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.281157 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.281509 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.281659 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.281751 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-config\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.281875 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.281987 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjzb\" (UniqueName: \"kubernetes.io/projected/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-kube-api-access-hqjzb\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.282495 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.282577 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-nb\") pod 
\"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.282585 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.282821 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-config\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.283412 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.313053 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjzb\" (UniqueName: \"kubernetes.io/projected/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-kube-api-access-hqjzb\") pod \"dnsmasq-dns-89c5cd4d5-rfmsw\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:09 crc kubenswrapper[4811]: I0219 10:30:09.434883 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:10 crc kubenswrapper[4811]: I0219 10:30:10.010737 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rfmsw"] Feb 19 10:30:10 crc kubenswrapper[4811]: I0219 10:30:10.964601 4811 generic.go:334] "Generic (PLEG): container finished" podID="9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" containerID="0c2fce79cd7c6b05d66a045b2c3061f296915748d992c8ed72007fb44b5e3133" exitCode=0 Feb 19 10:30:10 crc kubenswrapper[4811]: I0219 10:30:10.964699 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" event={"ID":"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca","Type":"ContainerDied","Data":"0c2fce79cd7c6b05d66a045b2c3061f296915748d992c8ed72007fb44b5e3133"} Feb 19 10:30:10 crc kubenswrapper[4811]: I0219 10:30:10.965325 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" event={"ID":"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca","Type":"ContainerStarted","Data":"e5c033b71c1a1caea5acb82aedba4145a239a20342ef81286d166b023a8cc3a4"} Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.289476 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.388276 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.389036 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="ceilometer-central-agent" containerID="cri-o://9118c4e25ef4eab041c586a685c361d49ea3a31df89267c418da02f512411d58" gracePeriod=30 Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.389094 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="sg-core" containerID="cri-o://a2e8e603045076a45f6c6c5b9ae04a9ec7f8a707021c8ed61ed075484a0f6df2" gracePeriod=30 Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.389213 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="ceilometer-notification-agent" containerID="cri-o://12870aa7025ecd628c0dc0ac428535905cb3222e5e60bf333fd73df42be9c674" gracePeriod=30 Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.389221 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="proxy-httpd" containerID="cri-o://26907b58e203ded2c3a246f24f10e058a93e8a255492259719dc2ea2ca388546" gracePeriod=30 Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.730665 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.975989 4811 generic.go:334] "Generic (PLEG): container finished" podID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerID="26907b58e203ded2c3a246f24f10e058a93e8a255492259719dc2ea2ca388546" exitCode=0 Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.976030 4811 generic.go:334] "Generic (PLEG): container finished" podID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerID="a2e8e603045076a45f6c6c5b9ae04a9ec7f8a707021c8ed61ed075484a0f6df2" exitCode=2 Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.976043 4811 generic.go:334] "Generic (PLEG): container finished" podID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerID="9118c4e25ef4eab041c586a685c361d49ea3a31df89267c418da02f512411d58" exitCode=0 Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.976058 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6455a4c1-e3c5-4852-ae89-11be1b0c3b98","Type":"ContainerDied","Data":"26907b58e203ded2c3a246f24f10e058a93e8a255492259719dc2ea2ca388546"} Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.976104 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6455a4c1-e3c5-4852-ae89-11be1b0c3b98","Type":"ContainerDied","Data":"a2e8e603045076a45f6c6c5b9ae04a9ec7f8a707021c8ed61ed075484a0f6df2"} Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.976116 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6455a4c1-e3c5-4852-ae89-11be1b0c3b98","Type":"ContainerDied","Data":"9118c4e25ef4eab041c586a685c361d49ea3a31df89267c418da02f512411d58"} Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.979354 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" event={"ID":"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca","Type":"ContainerStarted","Data":"ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467"} Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.979472 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerName="nova-api-log" containerID="cri-o://07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04" gracePeriod=30 Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.979512 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:11 crc kubenswrapper[4811]: I0219 10:30:11.979591 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerName="nova-api-api" containerID="cri-o://71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45" gracePeriod=30 Feb 19 10:30:12 crc kubenswrapper[4811]: I0219 10:30:12.016353 4811 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" podStartSLOduration=3.016331825 podStartE2EDuration="3.016331825s" podCreationTimestamp="2026-02-19 10:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:30:12.005134608 +0000 UTC m=+1300.531599304" watchObservedRunningTime="2026-02-19 10:30:12.016331825 +0000 UTC m=+1300.542796521" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.003629 4811 generic.go:334] "Generic (PLEG): container finished" podID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerID="12870aa7025ecd628c0dc0ac428535905cb3222e5e60bf333fd73df42be9c674" exitCode=0 Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.004294 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6455a4c1-e3c5-4852-ae89-11be1b0c3b98","Type":"ContainerDied","Data":"12870aa7025ecd628c0dc0ac428535905cb3222e5e60bf333fd73df42be9c674"} Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.008185 4811 generic.go:334] "Generic (PLEG): container finished" podID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerID="07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04" exitCode=143 Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.008313 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50acbe06-313b-43f2-a4ee-97de3f8778de","Type":"ContainerDied","Data":"07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04"} Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.169995 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.306948 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-config-data\") pod \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.307022 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-combined-ca-bundle\") pod \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.307128 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-log-httpd\") pod \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.307154 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-ceilometer-tls-certs\") pod \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.307233 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-scripts\") pod \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.307267 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-run-httpd\") pod \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.307296 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pc9h\" (UniqueName: \"kubernetes.io/projected/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-kube-api-access-2pc9h\") pod \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.307360 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-sg-core-conf-yaml\") pod \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\" (UID: \"6455a4c1-e3c5-4852-ae89-11be1b0c3b98\") " Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.307930 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6455a4c1-e3c5-4852-ae89-11be1b0c3b98" (UID: "6455a4c1-e3c5-4852-ae89-11be1b0c3b98"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.308473 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6455a4c1-e3c5-4852-ae89-11be1b0c3b98" (UID: "6455a4c1-e3c5-4852-ae89-11be1b0c3b98"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.314541 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-kube-api-access-2pc9h" (OuterVolumeSpecName: "kube-api-access-2pc9h") pod "6455a4c1-e3c5-4852-ae89-11be1b0c3b98" (UID: "6455a4c1-e3c5-4852-ae89-11be1b0c3b98"). InnerVolumeSpecName "kube-api-access-2pc9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.316916 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-scripts" (OuterVolumeSpecName: "scripts") pod "6455a4c1-e3c5-4852-ae89-11be1b0c3b98" (UID: "6455a4c1-e3c5-4852-ae89-11be1b0c3b98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.371717 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6455a4c1-e3c5-4852-ae89-11be1b0c3b98" (UID: "6455a4c1-e3c5-4852-ae89-11be1b0c3b98"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.403121 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6455a4c1-e3c5-4852-ae89-11be1b0c3b98" (UID: "6455a4c1-e3c5-4852-ae89-11be1b0c3b98"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.410872 4811 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.411107 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.411190 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.411282 4811 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.411364 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pc9h\" (UniqueName: \"kubernetes.io/projected/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-kube-api-access-2pc9h\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.411536 4811 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.424423 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6455a4c1-e3c5-4852-ae89-11be1b0c3b98" (UID: 
"6455a4c1-e3c5-4852-ae89-11be1b0c3b98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.432352 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-config-data" (OuterVolumeSpecName: "config-data") pod "6455a4c1-e3c5-4852-ae89-11be1b0c3b98" (UID: "6455a4c1-e3c5-4852-ae89-11be1b0c3b98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.513427 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:13 crc kubenswrapper[4811]: I0219 10:30:13.513498 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6455a4c1-e3c5-4852-ae89-11be1b0c3b98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.021246 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6455a4c1-e3c5-4852-ae89-11be1b0c3b98","Type":"ContainerDied","Data":"29385d7cfe910d905fe23c6da57572488ef955ab5ce7079d5d31240abcca3e24"} Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.021806 4811 scope.go:117] "RemoveContainer" containerID="26907b58e203ded2c3a246f24f10e058a93e8a255492259719dc2ea2ca388546" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.021337 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.054599 4811 scope.go:117] "RemoveContainer" containerID="a2e8e603045076a45f6c6c5b9ae04a9ec7f8a707021c8ed61ed075484a0f6df2" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.057396 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.066951 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.077840 4811 scope.go:117] "RemoveContainer" containerID="12870aa7025ecd628c0dc0ac428535905cb3222e5e60bf333fd73df42be9c674" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.084303 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:30:14 crc kubenswrapper[4811]: E0219 10:30:14.084939 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="proxy-httpd" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.084961 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="proxy-httpd" Feb 19 10:30:14 crc kubenswrapper[4811]: E0219 10:30:14.084977 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="sg-core" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.084991 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="sg-core" Feb 19 10:30:14 crc kubenswrapper[4811]: E0219 10:30:14.085001 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="ceilometer-central-agent" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.085007 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" 
containerName="ceilometer-central-agent" Feb 19 10:30:14 crc kubenswrapper[4811]: E0219 10:30:14.085020 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="ceilometer-notification-agent" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.085026 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="ceilometer-notification-agent" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.085253 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="ceilometer-central-agent" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.086654 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="proxy-httpd" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.086730 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="ceilometer-notification-agent" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.086816 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" containerName="sg-core" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.095652 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.101891 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.102358 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.102841 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.117813 4811 scope.go:117] "RemoveContainer" containerID="9118c4e25ef4eab041c586a685c361d49ea3a31df89267c418da02f512411d58" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.118033 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.244248 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-run-httpd\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.244297 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.244319 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-scripts\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 
19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.244338 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-log-httpd\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.244357 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.244572 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.244601 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfc46\" (UniqueName: \"kubernetes.io/projected/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-kube-api-access-nfc46\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.244628 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-config-data\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.345750 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.345788 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfc46\" (UniqueName: \"kubernetes.io/projected/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-kube-api-access-nfc46\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.345813 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-config-data\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.345860 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-run-httpd\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.345878 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.345896 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-scripts\") pod \"ceilometer-0\" (UID: 
\"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.345911 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-log-httpd\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.345930 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.349910 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-run-httpd\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.350298 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-log-httpd\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.354767 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.355250 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.360325 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.364414 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-config-data\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.365471 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-scripts\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.375436 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfc46\" (UniqueName: \"kubernetes.io/projected/cd0891f5-d119-48e2-8e7e-1ddfa51417a9-kube-api-access-nfc46\") pod \"ceilometer-0\" (UID: \"cd0891f5-d119-48e2-8e7e-1ddfa51417a9\") " pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.420027 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.608581 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6455a4c1-e3c5-4852-ae89-11be1b0c3b98" path="/var/lib/kubelet/pods/6455a4c1-e3c5-4852-ae89-11be1b0c3b98/volumes" Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.920024 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 10:30:14 crc kubenswrapper[4811]: W0219 10:30:14.922383 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd0891f5_d119_48e2_8e7e_1ddfa51417a9.slice/crio-dda8e74b64ea9ebdcbfada5e541c2138d68783381340fbdd50fd90c78dd5af59 WatchSource:0}: Error finding container dda8e74b64ea9ebdcbfada5e541c2138d68783381340fbdd50fd90c78dd5af59: Status 404 returned error can't find the container with id dda8e74b64ea9ebdcbfada5e541c2138d68783381340fbdd50fd90c78dd5af59 Feb 19 10:30:14 crc kubenswrapper[4811]: I0219 10:30:14.925321 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.033746 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0891f5-d119-48e2-8e7e-1ddfa51417a9","Type":"ContainerStarted","Data":"dda8e74b64ea9ebdcbfada5e541c2138d68783381340fbdd50fd90c78dd5af59"} Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.729791 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.780518 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-config-data\") pod \"50acbe06-313b-43f2-a4ee-97de3f8778de\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.780649 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50acbe06-313b-43f2-a4ee-97de3f8778de-logs\") pod \"50acbe06-313b-43f2-a4ee-97de3f8778de\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.780768 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjkfp\" (UniqueName: \"kubernetes.io/projected/50acbe06-313b-43f2-a4ee-97de3f8778de-kube-api-access-sjkfp\") pod \"50acbe06-313b-43f2-a4ee-97de3f8778de\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.780804 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-combined-ca-bundle\") pod \"50acbe06-313b-43f2-a4ee-97de3f8778de\" (UID: \"50acbe06-313b-43f2-a4ee-97de3f8778de\") " Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.781235 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50acbe06-313b-43f2-a4ee-97de3f8778de-logs" (OuterVolumeSpecName: "logs") pod "50acbe06-313b-43f2-a4ee-97de3f8778de" (UID: "50acbe06-313b-43f2-a4ee-97de3f8778de"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.786980 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50acbe06-313b-43f2-a4ee-97de3f8778de-kube-api-access-sjkfp" (OuterVolumeSpecName: "kube-api-access-sjkfp") pod "50acbe06-313b-43f2-a4ee-97de3f8778de" (UID: "50acbe06-313b-43f2-a4ee-97de3f8778de"). InnerVolumeSpecName "kube-api-access-sjkfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.825919 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-config-data" (OuterVolumeSpecName: "config-data") pod "50acbe06-313b-43f2-a4ee-97de3f8778de" (UID: "50acbe06-313b-43f2-a4ee-97de3f8778de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.832482 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50acbe06-313b-43f2-a4ee-97de3f8778de" (UID: "50acbe06-313b-43f2-a4ee-97de3f8778de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.883648 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjkfp\" (UniqueName: \"kubernetes.io/projected/50acbe06-313b-43f2-a4ee-97de3f8778de-kube-api-access-sjkfp\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.883696 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.883710 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50acbe06-313b-43f2-a4ee-97de3f8778de-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:15 crc kubenswrapper[4811]: I0219 10:30:15.883726 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50acbe06-313b-43f2-a4ee-97de3f8778de-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.046484 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0891f5-d119-48e2-8e7e-1ddfa51417a9","Type":"ContainerStarted","Data":"1cf4d44c94811048c0baf8ea14ae448c48d999836725dba41232145121b781ee"} Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.048081 4811 generic.go:334] "Generic (PLEG): container finished" podID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerID="71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45" exitCode=0 Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.048104 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50acbe06-313b-43f2-a4ee-97de3f8778de","Type":"ContainerDied","Data":"71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45"} Feb 19 10:30:16 crc 
kubenswrapper[4811]: I0219 10:30:16.048119 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"50acbe06-313b-43f2-a4ee-97de3f8778de","Type":"ContainerDied","Data":"d763bd6d738c976653e675d1a85c454914820a1810f6a334aaf0450cc2740c45"} Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.048135 4811 scope.go:117] "RemoveContainer" containerID="71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.048245 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.074426 4811 scope.go:117] "RemoveContainer" containerID="07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.099588 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.111578 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.117632 4811 scope.go:117] "RemoveContainer" containerID="71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45" Feb 19 10:30:16 crc kubenswrapper[4811]: E0219 10:30:16.119030 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45\": container with ID starting with 71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45 not found: ID does not exist" containerID="71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.119150 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45"} err="failed to get container 
status \"71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45\": rpc error: code = NotFound desc = could not find container \"71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45\": container with ID starting with 71775cbab2af3b98e88acaa4c86a86a0b027cd2a90155abee206d8ef73e4dd45 not found: ID does not exist" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.119258 4811 scope.go:117] "RemoveContainer" containerID="07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04" Feb 19 10:30:16 crc kubenswrapper[4811]: E0219 10:30:16.119730 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04\": container with ID starting with 07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04 not found: ID does not exist" containerID="07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.119762 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04"} err="failed to get container status \"07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04\": rpc error: code = NotFound desc = could not find container \"07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04\": container with ID starting with 07d73aab320176f070e630c7916cd6a4c29486d0d61c09ce11c36ca7d4e07f04 not found: ID does not exist" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.124389 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 10:30:16 crc kubenswrapper[4811]: E0219 10:30:16.125184 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerName="nova-api-api" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.125219 4811 
state_mem.go:107] "Deleted CPUSet assignment" podUID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerName="nova-api-api" Feb 19 10:30:16 crc kubenswrapper[4811]: E0219 10:30:16.125257 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerName="nova-api-log" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.125267 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerName="nova-api-log" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.125544 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerName="nova-api-log" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.125574 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="50acbe06-313b-43f2-a4ee-97de3f8778de" containerName="nova-api-api" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.126921 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.128998 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.130095 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.130672 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.157068 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.192058 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx468\" (UniqueName: \"kubernetes.io/projected/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-kube-api-access-xx468\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.192204 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-public-tls-certs\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.192254 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-logs\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.192302 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.192439 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-config-data\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.192517 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.289206 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.293663 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-config-data\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.293712 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.293752 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx468\" (UniqueName: 
\"kubernetes.io/projected/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-kube-api-access-xx468\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.293819 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-public-tls-certs\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.293856 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-logs\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.293898 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.294794 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-logs\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.299200 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-public-tls-certs\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.299346 4811 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.300525 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.306865 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-config-data\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.311798 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx468\" (UniqueName: \"kubernetes.io/projected/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-kube-api-access-xx468\") pod \"nova-api-0\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") " pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.315082 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.456569 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:30:16 crc kubenswrapper[4811]: I0219 10:30:16.624083 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50acbe06-313b-43f2-a4ee-97de3f8778de" path="/var/lib/kubelet/pods/50acbe06-313b-43f2-a4ee-97de3f8778de/volumes" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.036365 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.065134 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0891f5-d119-48e2-8e7e-1ddfa51417a9","Type":"ContainerStarted","Data":"6be3a641f5b3ea14fc33d5978a478d5a15102c79bb885cc8dae02122b36375ae"} Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.067380 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0fe9de3f-2c70-4bd8-aadf-410c3eb35259","Type":"ContainerStarted","Data":"b10b4ec9e00945a4221400670a9ee9aeba314ccf7353961be09c4dade6c59c14"} Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.092516 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.298107 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-654xk"] Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.299703 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.302552 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.305188 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.317932 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-654xk"] Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.323249 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.323430 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-scripts\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.323487 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-config-data\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.323549 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hftgd\" (UniqueName: 
\"kubernetes.io/projected/1b0af941-4e43-49e9-903b-438680c89e3c-kube-api-access-hftgd\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.425239 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-scripts\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.425308 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-config-data\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.425380 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hftgd\" (UniqueName: \"kubernetes.io/projected/1b0af941-4e43-49e9-903b-438680c89e3c-kube-api-access-hftgd\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.425523 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.431757 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-config-data\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.432557 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-scripts\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.437047 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.447941 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hftgd\" (UniqueName: \"kubernetes.io/projected/1b0af941-4e43-49e9-903b-438680c89e3c-kube-api-access-hftgd\") pod \"nova-cell1-cell-mapping-654xk\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") " pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:17 crc kubenswrapper[4811]: I0219 10:30:17.633325 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-654xk" Feb 19 10:30:18 crc kubenswrapper[4811]: I0219 10:30:18.083280 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0891f5-d119-48e2-8e7e-1ddfa51417a9","Type":"ContainerStarted","Data":"b895020992a882d4e366f4b9d1f77ad2b76af795fdc0356adaa936f3143c4f78"} Feb 19 10:30:18 crc kubenswrapper[4811]: I0219 10:30:18.086063 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0fe9de3f-2c70-4bd8-aadf-410c3eb35259","Type":"ContainerStarted","Data":"08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e"} Feb 19 10:30:18 crc kubenswrapper[4811]: I0219 10:30:18.086093 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0fe9de3f-2c70-4bd8-aadf-410c3eb35259","Type":"ContainerStarted","Data":"d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26"} Feb 19 10:30:18 crc kubenswrapper[4811]: I0219 10:30:18.113257 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.113229388 podStartE2EDuration="2.113229388s" podCreationTimestamp="2026-02-19 10:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:30:18.109966955 +0000 UTC m=+1306.636431641" watchObservedRunningTime="2026-02-19 10:30:18.113229388 +0000 UTC m=+1306.639694074" Feb 19 10:30:18 crc kubenswrapper[4811]: I0219 10:30:18.181055 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-654xk"] Feb 19 10:30:19 crc kubenswrapper[4811]: I0219 10:30:19.101193 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-654xk" event={"ID":"1b0af941-4e43-49e9-903b-438680c89e3c","Type":"ContainerStarted","Data":"797124afab4855364d9a11b8d2b68ce1c0f981de88f73d7cf60c3e68bd4e8b0f"} 
Feb 19 10:30:19 crc kubenswrapper[4811]: I0219 10:30:19.101890 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-654xk" event={"ID":"1b0af941-4e43-49e9-903b-438680c89e3c","Type":"ContainerStarted","Data":"09ab49a2ddd84793d3277d1e3c66351418be651776a2f3b34dca3933ec30d772"} Feb 19 10:30:19 crc kubenswrapper[4811]: I0219 10:30:19.127210 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-654xk" podStartSLOduration=2.1271839 podStartE2EDuration="2.1271839s" podCreationTimestamp="2026-02-19 10:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:30:19.118740094 +0000 UTC m=+1307.645204780" watchObservedRunningTime="2026-02-19 10:30:19.1271839 +0000 UTC m=+1307.653648596" Feb 19 10:30:19 crc kubenswrapper[4811]: I0219 10:30:19.441668 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:30:19 crc kubenswrapper[4811]: I0219 10:30:19.563192 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zm5wz"] Feb 19 10:30:19 crc kubenswrapper[4811]: I0219 10:30:19.573076 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" podUID="140c530d-9fec-4354-bada-2733c806bf1b" containerName="dnsmasq-dns" containerID="cri-o://c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9" gracePeriod=10 Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.083923 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.112162 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0891f5-d119-48e2-8e7e-1ddfa51417a9","Type":"ContainerStarted","Data":"ae5a846fc959cb5cdcf9856117e554b987ca44f2e424344f5a282f1ee46999bc"} Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.113502 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.115995 4811 generic.go:334] "Generic (PLEG): container finished" podID="140c530d-9fec-4354-bada-2733c806bf1b" containerID="c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9" exitCode=0 Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.116234 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.116715 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" event={"ID":"140c530d-9fec-4354-bada-2733c806bf1b","Type":"ContainerDied","Data":"c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9"} Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.116758 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" event={"ID":"140c530d-9fec-4354-bada-2733c806bf1b","Type":"ContainerDied","Data":"0851ad276b4baa5d8532f207d396a37040371baf97cafb2aa8c5e028977af0a6"} Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.116782 4811 scope.go:117] "RemoveContainer" containerID="c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9" Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.141711 4811 scope.go:117] "RemoveContainer" containerID="9108a83735f14a64e07149e9a7f591657172199cca06768893c6d34b737c5fa4" Feb 19 10:30:20 crc kubenswrapper[4811]: 
I0219 10:30:20.150681 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6599377990000002 podStartE2EDuration="6.150643383s" podCreationTimestamp="2026-02-19 10:30:14 +0000 UTC" firstStartedPulling="2026-02-19 10:30:14.925058707 +0000 UTC m=+1303.451523393" lastFinishedPulling="2026-02-19 10:30:19.415764291 +0000 UTC m=+1307.942228977" observedRunningTime="2026-02-19 10:30:20.145075561 +0000 UTC m=+1308.671540247" watchObservedRunningTime="2026-02-19 10:30:20.150643383 +0000 UTC m=+1308.677108069" Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.171845 4811 scope.go:117] "RemoveContainer" containerID="c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9" Feb 19 10:30:20 crc kubenswrapper[4811]: E0219 10:30:20.174703 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9\": container with ID starting with c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9 not found: ID does not exist" containerID="c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9" Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.174773 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9"} err="failed to get container status \"c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9\": rpc error: code = NotFound desc = could not find container \"c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9\": container with ID starting with c434de7bc4106edc53948606a9d2518b1e48f27659ac8b1f7431cf71e81a5ef9 not found: ID does not exist" Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.174921 4811 scope.go:117] "RemoveContainer" containerID="9108a83735f14a64e07149e9a7f591657172199cca06768893c6d34b737c5fa4" Feb 19 
10:30:20 crc kubenswrapper[4811]: E0219 10:30:20.175340 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9108a83735f14a64e07149e9a7f591657172199cca06768893c6d34b737c5fa4\": container with ID starting with 9108a83735f14a64e07149e9a7f591657172199cca06768893c6d34b737c5fa4 not found: ID does not exist" containerID="9108a83735f14a64e07149e9a7f591657172199cca06768893c6d34b737c5fa4"
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.175375 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9108a83735f14a64e07149e9a7f591657172199cca06768893c6d34b737c5fa4"} err="failed to get container status \"9108a83735f14a64e07149e9a7f591657172199cca06768893c6d34b737c5fa4\": rpc error: code = NotFound desc = could not find container \"9108a83735f14a64e07149e9a7f591657172199cca06768893c6d34b737c5fa4\": container with ID starting with 9108a83735f14a64e07149e9a7f591657172199cca06768893c6d34b737c5fa4 not found: ID does not exist"
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.184637 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-nb\") pod \"140c530d-9fec-4354-bada-2733c806bf1b\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") "
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.184678 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-config\") pod \"140c530d-9fec-4354-bada-2733c806bf1b\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") "
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.184818 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-sb\") pod \"140c530d-9fec-4354-bada-2733c806bf1b\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") "
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.184858 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-swift-storage-0\") pod \"140c530d-9fec-4354-bada-2733c806bf1b\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") "
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.184889 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25fpm\" (UniqueName: \"kubernetes.io/projected/140c530d-9fec-4354-bada-2733c806bf1b-kube-api-access-25fpm\") pod \"140c530d-9fec-4354-bada-2733c806bf1b\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") "
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.184921 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-svc\") pod \"140c530d-9fec-4354-bada-2733c806bf1b\" (UID: \"140c530d-9fec-4354-bada-2733c806bf1b\") "
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.192185 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/140c530d-9fec-4354-bada-2733c806bf1b-kube-api-access-25fpm" (OuterVolumeSpecName: "kube-api-access-25fpm") pod "140c530d-9fec-4354-bada-2733c806bf1b" (UID: "140c530d-9fec-4354-bada-2733c806bf1b"). InnerVolumeSpecName "kube-api-access-25fpm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.246146 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "140c530d-9fec-4354-bada-2733c806bf1b" (UID: "140c530d-9fec-4354-bada-2733c806bf1b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.251351 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-config" (OuterVolumeSpecName: "config") pod "140c530d-9fec-4354-bada-2733c806bf1b" (UID: "140c530d-9fec-4354-bada-2733c806bf1b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.266865 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "140c530d-9fec-4354-bada-2733c806bf1b" (UID: "140c530d-9fec-4354-bada-2733c806bf1b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.268830 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "140c530d-9fec-4354-bada-2733c806bf1b" (UID: "140c530d-9fec-4354-bada-2733c806bf1b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.282319 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "140c530d-9fec-4354-bada-2733c806bf1b" (UID: "140c530d-9fec-4354-bada-2733c806bf1b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.290513 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.290749 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-config\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.290868 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.290951 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.291029 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25fpm\" (UniqueName: \"kubernetes.io/projected/140c530d-9fec-4354-bada-2733c806bf1b-kube-api-access-25fpm\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.291090 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/140c530d-9fec-4354-bada-2733c806bf1b-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.535721 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zm5wz"]
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.545567 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-zm5wz"]
Feb 19 10:30:20 crc kubenswrapper[4811]: I0219 10:30:20.597275 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="140c530d-9fec-4354-bada-2733c806bf1b" path="/var/lib/kubelet/pods/140c530d-9fec-4354-bada-2733c806bf1b/volumes"
Feb 19 10:30:24 crc kubenswrapper[4811]: I0219 10:30:24.161403 4811 generic.go:334] "Generic (PLEG): container finished" podID="1b0af941-4e43-49e9-903b-438680c89e3c" containerID="797124afab4855364d9a11b8d2b68ce1c0f981de88f73d7cf60c3e68bd4e8b0f" exitCode=0
Feb 19 10:30:24 crc kubenswrapper[4811]: I0219 10:30:24.162185 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-654xk" event={"ID":"1b0af941-4e43-49e9-903b-438680c89e3c","Type":"ContainerDied","Data":"797124afab4855364d9a11b8d2b68ce1c0f981de88f73d7cf60c3e68bd4e8b0f"}
Feb 19 10:30:24 crc kubenswrapper[4811]: I0219 10:30:24.807802 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-zm5wz" podUID="140c530d-9fec-4354-bada-2733c806bf1b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.190:5353: i/o timeout"
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.579082 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-654xk"
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.604108 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-combined-ca-bundle\") pod \"1b0af941-4e43-49e9-903b-438680c89e3c\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") "
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.604179 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hftgd\" (UniqueName: \"kubernetes.io/projected/1b0af941-4e43-49e9-903b-438680c89e3c-kube-api-access-hftgd\") pod \"1b0af941-4e43-49e9-903b-438680c89e3c\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") "
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.604385 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-config-data\") pod \"1b0af941-4e43-49e9-903b-438680c89e3c\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") "
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.604621 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-scripts\") pod \"1b0af941-4e43-49e9-903b-438680c89e3c\" (UID: \"1b0af941-4e43-49e9-903b-438680c89e3c\") "
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.613705 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-scripts" (OuterVolumeSpecName: "scripts") pod "1b0af941-4e43-49e9-903b-438680c89e3c" (UID: "1b0af941-4e43-49e9-903b-438680c89e3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.625194 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b0af941-4e43-49e9-903b-438680c89e3c-kube-api-access-hftgd" (OuterVolumeSpecName: "kube-api-access-hftgd") pod "1b0af941-4e43-49e9-903b-438680c89e3c" (UID: "1b0af941-4e43-49e9-903b-438680c89e3c"). InnerVolumeSpecName "kube-api-access-hftgd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.641641 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-config-data" (OuterVolumeSpecName: "config-data") pod "1b0af941-4e43-49e9-903b-438680c89e3c" (UID: "1b0af941-4e43-49e9-903b-438680c89e3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.643907 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b0af941-4e43-49e9-903b-438680c89e3c" (UID: "1b0af941-4e43-49e9-903b-438680c89e3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.708141 4811 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.708215 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.708238 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hftgd\" (UniqueName: \"kubernetes.io/projected/1b0af941-4e43-49e9-903b-438680c89e3c-kube-api-access-hftgd\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:25 crc kubenswrapper[4811]: I0219 10:30:25.708256 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b0af941-4e43-49e9-903b-438680c89e3c-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.187922 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-654xk" event={"ID":"1b0af941-4e43-49e9-903b-438680c89e3c","Type":"ContainerDied","Data":"09ab49a2ddd84793d3277d1e3c66351418be651776a2f3b34dca3933ec30d772"}
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.187979 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09ab49a2ddd84793d3277d1e3c66351418be651776a2f3b34dca3933ec30d772"
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.188045 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-654xk"
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.367361 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.367777 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0fe9de3f-2c70-4bd8-aadf-410c3eb35259" containerName="nova-api-api" containerID="cri-o://08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e" gracePeriod=30
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.367998 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0fe9de3f-2c70-4bd8-aadf-410c3eb35259" containerName="nova-api-log" containerID="cri-o://d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26" gracePeriod=30
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.380664 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.380925 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="79d85ecb-0aba-485f-a679-452a09aed5d4" containerName="nova-scheduler-scheduler" containerID="cri-o://2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621" gracePeriod=30
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.423133 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.429108 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-log" containerID="cri-o://dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0" gracePeriod=30
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.429282 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-metadata" containerID="cri-o://b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e" gracePeriod=30
Feb 19 10:30:26 crc kubenswrapper[4811]: I0219 10:30:26.963637 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 10:30:27 crc kubenswrapper[4811]: E0219 10:30:27.032658 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.036384 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-combined-ca-bundle\") pod \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") "
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.036552 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-config-data\") pod \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") "
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.036696 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-logs\") pod \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") "
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.037005 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-public-tls-certs\") pod \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") "
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.037083 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-internal-tls-certs\") pod \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") "
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.037086 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-logs" (OuterVolumeSpecName: "logs") pod "0fe9de3f-2c70-4bd8-aadf-410c3eb35259" (UID: "0fe9de3f-2c70-4bd8-aadf-410c3eb35259"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.037144 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx468\" (UniqueName: \"kubernetes.io/projected/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-kube-api-access-xx468\") pod \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\" (UID: \"0fe9de3f-2c70-4bd8-aadf-410c3eb35259\") "
Feb 19 10:30:27 crc kubenswrapper[4811]: E0219 10:30:27.037576 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.038635 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-logs\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.041587 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-kube-api-access-xx468" (OuterVolumeSpecName: "kube-api-access-xx468") pod "0fe9de3f-2c70-4bd8-aadf-410c3eb35259" (UID: "0fe9de3f-2c70-4bd8-aadf-410c3eb35259"). InnerVolumeSpecName "kube-api-access-xx468". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 10:30:27 crc kubenswrapper[4811]: E0219 10:30:27.045757 4811 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 19 10:30:27 crc kubenswrapper[4811]: E0219 10:30:27.045838 4811 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="79d85ecb-0aba-485f-a679-452a09aed5d4" containerName="nova-scheduler-scheduler"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.064274 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fe9de3f-2c70-4bd8-aadf-410c3eb35259" (UID: "0fe9de3f-2c70-4bd8-aadf-410c3eb35259"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.071082 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-config-data" (OuterVolumeSpecName: "config-data") pod "0fe9de3f-2c70-4bd8-aadf-410c3eb35259" (UID: "0fe9de3f-2c70-4bd8-aadf-410c3eb35259"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.093224 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0fe9de3f-2c70-4bd8-aadf-410c3eb35259" (UID: "0fe9de3f-2c70-4bd8-aadf-410c3eb35259"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.102034 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0fe9de3f-2c70-4bd8-aadf-410c3eb35259" (UID: "0fe9de3f-2c70-4bd8-aadf-410c3eb35259"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.140526 4811 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.140566 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx468\" (UniqueName: \"kubernetes.io/projected/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-kube-api-access-xx468\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.140580 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.140592 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.140604 4811 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe9de3f-2c70-4bd8-aadf-410c3eb35259-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.206509 4811 generic.go:334] "Generic (PLEG): container finished" podID="0fe9de3f-2c70-4bd8-aadf-410c3eb35259" containerID="08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e" exitCode=0
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.206560 4811 generic.go:334] "Generic (PLEG): container finished" podID="0fe9de3f-2c70-4bd8-aadf-410c3eb35259" containerID="d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26" exitCode=143
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.206575 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.206596 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0fe9de3f-2c70-4bd8-aadf-410c3eb35259","Type":"ContainerDied","Data":"08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e"}
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.206673 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0fe9de3f-2c70-4bd8-aadf-410c3eb35259","Type":"ContainerDied","Data":"d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26"}
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.206686 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0fe9de3f-2c70-4bd8-aadf-410c3eb35259","Type":"ContainerDied","Data":"b10b4ec9e00945a4221400670a9ee9aeba314ccf7353961be09c4dade6c59c14"}
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.206706 4811 scope.go:117] "RemoveContainer" containerID="08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.211408 4811 generic.go:334] "Generic (PLEG): container finished" podID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerID="dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0" exitCode=143
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.211463 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d","Type":"ContainerDied","Data":"dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0"}
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.248558 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.254871 4811 scope.go:117] "RemoveContainer" containerID="d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.265164 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.284823 4811 scope.go:117] "RemoveContainer" containerID="08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e"
Feb 19 10:30:27 crc kubenswrapper[4811]: E0219 10:30:27.286287 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e\": container with ID starting with 08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e not found: ID does not exist" containerID="08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.286351 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e"} err="failed to get container status \"08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e\": rpc error: code = NotFound desc = could not find container \"08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e\": container with ID starting with 08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e not found: ID does not exist"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.286378 4811 scope.go:117] "RemoveContainer" containerID="d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26"
Feb 19 10:30:27 crc kubenswrapper[4811]: E0219 10:30:27.287757 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26\": container with ID starting with d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26 not found: ID does not exist" containerID="d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.287797 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26"} err="failed to get container status \"d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26\": rpc error: code = NotFound desc = could not find container \"d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26\": container with ID starting with d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26 not found: ID does not exist"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.287814 4811 scope.go:117] "RemoveContainer" containerID="08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.288150 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e"} err="failed to get container status \"08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e\": rpc error: code = NotFound desc = could not find container \"08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e\": container with ID starting with 08d2c16cbf05bf6e9187c159e8e1d7f07c02b1f93dade620015e703a7f49fb5e not found: ID does not exist"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.288171 4811 scope.go:117] "RemoveContainer" containerID="d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.288783 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26"} err="failed to get container status \"d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26\": rpc error: code = NotFound desc = could not find container \"d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26\": container with ID starting with d0dbd2dee0f91b92367736c350abfcff49c7b2182d09e663949412502da17e26 not found: ID does not exist"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.298839 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:30:27 crc kubenswrapper[4811]: E0219 10:30:27.299912 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140c530d-9fec-4354-bada-2733c806bf1b" containerName="dnsmasq-dns"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.299938 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="140c530d-9fec-4354-bada-2733c806bf1b" containerName="dnsmasq-dns"
Feb 19 10:30:27 crc kubenswrapper[4811]: E0219 10:30:27.299994 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe9de3f-2c70-4bd8-aadf-410c3eb35259" containerName="nova-api-api"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.300004 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe9de3f-2c70-4bd8-aadf-410c3eb35259" containerName="nova-api-api"
Feb 19 10:30:27 crc kubenswrapper[4811]: E0219 10:30:27.300025 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe9de3f-2c70-4bd8-aadf-410c3eb35259" containerName="nova-api-log"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.300032 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe9de3f-2c70-4bd8-aadf-410c3eb35259" containerName="nova-api-log"
Feb 19 10:30:27 crc kubenswrapper[4811]: E0219 10:30:27.300051 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140c530d-9fec-4354-bada-2733c806bf1b" containerName="init"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.300076 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="140c530d-9fec-4354-bada-2733c806bf1b" containerName="init"
Feb 19 10:30:27 crc kubenswrapper[4811]: E0219 10:30:27.300119 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0af941-4e43-49e9-903b-438680c89e3c" containerName="nova-manage"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.300128 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0af941-4e43-49e9-903b-438680c89e3c" containerName="nova-manage"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.300700 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0af941-4e43-49e9-903b-438680c89e3c" containerName="nova-manage"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.300735 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe9de3f-2c70-4bd8-aadf-410c3eb35259" containerName="nova-api-api"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.300761 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="140c530d-9fec-4354-bada-2733c806bf1b" containerName="dnsmasq-dns"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.300781 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe9de3f-2c70-4bd8-aadf-410c3eb35259" containerName="nova-api-log"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.303671 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.312284 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.312590 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.312656 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.319502 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.343570 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-config-data\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.343851 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.343946 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.344033 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-logs\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.344190 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk99g\" (UniqueName: \"kubernetes.io/projected/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-kube-api-access-dk99g\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.344359 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-public-tls-certs\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.446111 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.446584 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-logs\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0"
Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.446791 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk99g\" (UniqueName: \"kubernetes.io/projected/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-kube-api-access-dk99g\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0"
Feb 19 10:30:27 
crc kubenswrapper[4811]: I0219 10:30:27.447211 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-logs\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0" Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.447697 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-public-tls-certs\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0" Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.448243 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-config-data\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0" Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.448499 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0" Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.453331 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-config-data\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0" Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.453391 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0" Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.453774 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-public-tls-certs\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0" Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.453969 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0" Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.467430 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk99g\" (UniqueName: \"kubernetes.io/projected/2746bf91-50ba-4dc9-a6cf-1f7117f48ee2-kube-api-access-dk99g\") pod \"nova-api-0\" (UID: \"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2\") " pod="openstack/nova-api-0" Feb 19 10:30:27 crc kubenswrapper[4811]: I0219 10:30:27.625773 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 10:30:28 crc kubenswrapper[4811]: I0219 10:30:28.082541 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 10:30:28 crc kubenswrapper[4811]: I0219 10:30:28.228273 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2","Type":"ContainerStarted","Data":"bace18a2ceaa386b638ec7eda5dd0c70dfdfc8ac16deffbd2059343e2a7d8c08"} Feb 19 10:30:28 crc kubenswrapper[4811]: I0219 10:30:28.606202 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fe9de3f-2c70-4bd8-aadf-410c3eb35259" path="/var/lib/kubelet/pods/0fe9de3f-2c70-4bd8-aadf-410c3eb35259/volumes" Feb 19 10:30:29 crc kubenswrapper[4811]: I0219 10:30:29.239790 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2","Type":"ContainerStarted","Data":"1d0cb1cf8c1f93f010d4da1059dbbaef6874c32bff9b690274d7cddb9ed6917a"} Feb 19 10:30:29 crc kubenswrapper[4811]: I0219 10:30:29.239834 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2746bf91-50ba-4dc9-a6cf-1f7117f48ee2","Type":"ContainerStarted","Data":"d971320234fba5de011b0dd58381d279fec01d575cb758dda96183825dd61626"} Feb 19 10:30:29 crc kubenswrapper[4811]: I0219 10:30:29.270544 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.270528189 podStartE2EDuration="2.270528189s" podCreationTimestamp="2026-02-19 10:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:30:29.263436518 +0000 UTC m=+1317.789901204" watchObservedRunningTime="2026-02-19 10:30:29.270528189 +0000 UTC m=+1317.796992875" Feb 19 10:30:29 crc kubenswrapper[4811]: I0219 10:30:29.586462 4811 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:60400->10.217.0.196:8775: read: connection reset by peer" Feb 19 10:30:29 crc kubenswrapper[4811]: I0219 10:30:29.587295 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:60388->10.217.0.196:8775: read: connection reset by peer" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.064694 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.105520 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gg5c\" (UniqueName: \"kubernetes.io/projected/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-kube-api-access-7gg5c\") pod \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.105674 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-nova-metadata-tls-certs\") pod \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.105838 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-combined-ca-bundle\") pod \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.106016 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-logs\") pod \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.106065 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-config-data\") pod \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\" (UID: \"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d\") " Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.108972 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-logs" (OuterVolumeSpecName: "logs") pod "a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" (UID: "a1e765f8-1094-4b4e-a5a2-5dc86e28c08d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.119021 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-kube-api-access-7gg5c" (OuterVolumeSpecName: "kube-api-access-7gg5c") pod "a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" (UID: "a1e765f8-1094-4b4e-a5a2-5dc86e28c08d"). InnerVolumeSpecName "kube-api-access-7gg5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.147738 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-config-data" (OuterVolumeSpecName: "config-data") pod "a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" (UID: "a1e765f8-1094-4b4e-a5a2-5dc86e28c08d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.160760 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" (UID: "a1e765f8-1094-4b4e-a5a2-5dc86e28c08d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.206117 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" (UID: "a1e765f8-1094-4b4e-a5a2-5dc86e28c08d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.208003 4811 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.208156 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.208174 4811 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.208184 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-config-data\") on 
node \"crc\" DevicePath \"\"" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.208192 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gg5c\" (UniqueName: \"kubernetes.io/projected/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d-kube-api-access-7gg5c\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.259026 4811 generic.go:334] "Generic (PLEG): container finished" podID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerID="b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e" exitCode=0 Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.261267 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.261909 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d","Type":"ContainerDied","Data":"b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e"} Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.261951 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a1e765f8-1094-4b4e-a5a2-5dc86e28c08d","Type":"ContainerDied","Data":"4ec2080b166da2ccd3ddbeb420930ebd340ac2ebea10ddcc3545479b208a0413"} Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.261973 4811 scope.go:117] "RemoveContainer" containerID="b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.296466 4811 scope.go:117] "RemoveContainer" containerID="dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.309766 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.322606 4811 scope.go:117] "RemoveContainer" 
containerID="b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e" Feb 19 10:30:30 crc kubenswrapper[4811]: E0219 10:30:30.324062 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e\": container with ID starting with b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e not found: ID does not exist" containerID="b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.324096 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e"} err="failed to get container status \"b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e\": rpc error: code = NotFound desc = could not find container \"b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e\": container with ID starting with b2c9f892450efb4364ca9adcac51196aaa7405233147e8e46cf7fda73960088e not found: ID does not exist" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.324122 4811 scope.go:117] "RemoveContainer" containerID="dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0" Feb 19 10:30:30 crc kubenswrapper[4811]: E0219 10:30:30.325764 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0\": container with ID starting with dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0 not found: ID does not exist" containerID="dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.325808 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0"} err="failed to get container status \"dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0\": rpc error: code = NotFound desc = could not find container \"dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0\": container with ID starting with dd32eec941c4e991f298fa078cd0864b1db0f3b062ca4f5520f8c9ed7331e8b0 not found: ID does not exist" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.335072 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.345624 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:30:30 crc kubenswrapper[4811]: E0219 10:30:30.346231 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-log" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.346247 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-log" Feb 19 10:30:30 crc kubenswrapper[4811]: E0219 10:30:30.346265 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-metadata" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.346271 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-metadata" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.346466 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-log" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.346477 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" containerName="nova-metadata-metadata" Feb 19 
10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.347658 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.356738 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.357068 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.364846 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.412608 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e32654e-c645-4e60-9879-73ed179fa105-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.413489 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fh7q\" (UniqueName: \"kubernetes.io/projected/2e32654e-c645-4e60-9879-73ed179fa105-kube-api-access-8fh7q\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.413651 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e32654e-c645-4e60-9879-73ed179fa105-logs\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.413798 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2e32654e-c645-4e60-9879-73ed179fa105-config-data\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.413919 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e32654e-c645-4e60-9879-73ed179fa105-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.516112 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e32654e-c645-4e60-9879-73ed179fa105-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.516328 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fh7q\" (UniqueName: \"kubernetes.io/projected/2e32654e-c645-4e60-9879-73ed179fa105-kube-api-access-8fh7q\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.516410 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e32654e-c645-4e60-9879-73ed179fa105-logs\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.516641 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e32654e-c645-4e60-9879-73ed179fa105-config-data\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " 
pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.516743 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e32654e-c645-4e60-9879-73ed179fa105-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.517332 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e32654e-c645-4e60-9879-73ed179fa105-logs\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.520014 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e32654e-c645-4e60-9879-73ed179fa105-config-data\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.520466 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e32654e-c645-4e60-9879-73ed179fa105-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.521149 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e32654e-c645-4e60-9879-73ed179fa105-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.535463 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fh7q\" (UniqueName: 
\"kubernetes.io/projected/2e32654e-c645-4e60-9879-73ed179fa105-kube-api-access-8fh7q\") pod \"nova-metadata-0\" (UID: \"2e32654e-c645-4e60-9879-73ed179fa105\") " pod="openstack/nova-metadata-0" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.596510 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e765f8-1094-4b4e-a5a2-5dc86e28c08d" path="/var/lib/kubelet/pods/a1e765f8-1094-4b4e-a5a2-5dc86e28c08d/volumes" Feb 19 10:30:30 crc kubenswrapper[4811]: I0219 10:30:30.666921 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 10:30:31 crc kubenswrapper[4811]: I0219 10:30:31.214630 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 10:30:31 crc kubenswrapper[4811]: I0219 10:30:31.272663 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e32654e-c645-4e60-9879-73ed179fa105","Type":"ContainerStarted","Data":"26e1d2be802f36cee5be0b0769d4fd27c47a1bbd896dbb4a1c63e963c5e0e8e6"} Feb 19 10:30:31 crc kubenswrapper[4811]: I0219 10:30:31.882178 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:30:31 crc kubenswrapper[4811]: I0219 10:30:31.984714 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88rq9\" (UniqueName: \"kubernetes.io/projected/79d85ecb-0aba-485f-a679-452a09aed5d4-kube-api-access-88rq9\") pod \"79d85ecb-0aba-485f-a679-452a09aed5d4\" (UID: \"79d85ecb-0aba-485f-a679-452a09aed5d4\") " Feb 19 10:30:31 crc kubenswrapper[4811]: I0219 10:30:31.984851 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-combined-ca-bundle\") pod \"79d85ecb-0aba-485f-a679-452a09aed5d4\" (UID: \"79d85ecb-0aba-485f-a679-452a09aed5d4\") " Feb 19 10:30:31 crc kubenswrapper[4811]: I0219 10:30:31.984931 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-config-data\") pod \"79d85ecb-0aba-485f-a679-452a09aed5d4\" (UID: \"79d85ecb-0aba-485f-a679-452a09aed5d4\") " Feb 19 10:30:31 crc kubenswrapper[4811]: I0219 10:30:31.991252 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d85ecb-0aba-485f-a679-452a09aed5d4-kube-api-access-88rq9" (OuterVolumeSpecName: "kube-api-access-88rq9") pod "79d85ecb-0aba-485f-a679-452a09aed5d4" (UID: "79d85ecb-0aba-485f-a679-452a09aed5d4"). InnerVolumeSpecName "kube-api-access-88rq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.015309 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-config-data" (OuterVolumeSpecName: "config-data") pod "79d85ecb-0aba-485f-a679-452a09aed5d4" (UID: "79d85ecb-0aba-485f-a679-452a09aed5d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.023093 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79d85ecb-0aba-485f-a679-452a09aed5d4" (UID: "79d85ecb-0aba-485f-a679-452a09aed5d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.087697 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88rq9\" (UniqueName: \"kubernetes.io/projected/79d85ecb-0aba-485f-a679-452a09aed5d4-kube-api-access-88rq9\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.087754 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.087767 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d85ecb-0aba-485f-a679-452a09aed5d4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.287392 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e32654e-c645-4e60-9879-73ed179fa105","Type":"ContainerStarted","Data":"512f19fd3a2b47648dde6e887b54d460fc85e2ae4b12855555261996dcc5b158"} Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.287465 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2e32654e-c645-4e60-9879-73ed179fa105","Type":"ContainerStarted","Data":"0240239b7ede6976f263c431dbd21b2c75c584d1e909ec24f3b6c6f9dd2462fa"} Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.290584 4811 
generic.go:334] "Generic (PLEG): container finished" podID="79d85ecb-0aba-485f-a679-452a09aed5d4" containerID="2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621" exitCode=0 Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.290651 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79d85ecb-0aba-485f-a679-452a09aed5d4","Type":"ContainerDied","Data":"2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621"} Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.290696 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79d85ecb-0aba-485f-a679-452a09aed5d4","Type":"ContainerDied","Data":"ad72480d8eae9ae29343e76d69705421daf195630403f7001e2922c310074732"} Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.290720 4811 scope.go:117] "RemoveContainer" containerID="2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.290860 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.318313 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.318288324 podStartE2EDuration="2.318288324s" podCreationTimestamp="2026-02-19 10:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:30:32.31539587 +0000 UTC m=+1320.841860556" watchObservedRunningTime="2026-02-19 10:30:32.318288324 +0000 UTC m=+1320.844753010" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.320312 4811 scope.go:117] "RemoveContainer" containerID="2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621" Feb 19 10:30:32 crc kubenswrapper[4811]: E0219 10:30:32.321156 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621\": container with ID starting with 2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621 not found: ID does not exist" containerID="2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.321198 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621"} err="failed to get container status \"2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621\": rpc error: code = NotFound desc = could not find container \"2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621\": container with ID starting with 2505d5161928a585652b9fbf661b0d4c9d170e5ecf7bf8585256977079bbd621 not found: ID does not exist" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.344491 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.352468 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.363181 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:30:32 crc kubenswrapper[4811]: E0219 10:30:32.363948 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d85ecb-0aba-485f-a679-452a09aed5d4" containerName="nova-scheduler-scheduler" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.363978 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d85ecb-0aba-485f-a679-452a09aed5d4" containerName="nova-scheduler-scheduler" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.364316 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d85ecb-0aba-485f-a679-452a09aed5d4" containerName="nova-scheduler-scheduler" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.365470 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.373411 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.392135 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362f5b81-de68-4bce-824c-734e9d18f1f1-config-data\") pod \"nova-scheduler-0\" (UID: \"362f5b81-de68-4bce-824c-734e9d18f1f1\") " pod="openstack/nova-scheduler-0" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.392402 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6wk7\" (UniqueName: \"kubernetes.io/projected/362f5b81-de68-4bce-824c-734e9d18f1f1-kube-api-access-c6wk7\") pod \"nova-scheduler-0\" (UID: \"362f5b81-de68-4bce-824c-734e9d18f1f1\") " pod="openstack/nova-scheduler-0" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.392498 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362f5b81-de68-4bce-824c-734e9d18f1f1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"362f5b81-de68-4bce-824c-734e9d18f1f1\") " pod="openstack/nova-scheduler-0" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.402794 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.493920 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6wk7\" (UniqueName: \"kubernetes.io/projected/362f5b81-de68-4bce-824c-734e9d18f1f1-kube-api-access-c6wk7\") pod \"nova-scheduler-0\" (UID: \"362f5b81-de68-4bce-824c-734e9d18f1f1\") " pod="openstack/nova-scheduler-0" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.494038 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362f5b81-de68-4bce-824c-734e9d18f1f1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"362f5b81-de68-4bce-824c-734e9d18f1f1\") " pod="openstack/nova-scheduler-0" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.494079 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362f5b81-de68-4bce-824c-734e9d18f1f1-config-data\") pod \"nova-scheduler-0\" (UID: \"362f5b81-de68-4bce-824c-734e9d18f1f1\") " pod="openstack/nova-scheduler-0" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.499937 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362f5b81-de68-4bce-824c-734e9d18f1f1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"362f5b81-de68-4bce-824c-734e9d18f1f1\") " pod="openstack/nova-scheduler-0" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.501014 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362f5b81-de68-4bce-824c-734e9d18f1f1-config-data\") pod \"nova-scheduler-0\" (UID: \"362f5b81-de68-4bce-824c-734e9d18f1f1\") " pod="openstack/nova-scheduler-0" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.513917 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6wk7\" (UniqueName: \"kubernetes.io/projected/362f5b81-de68-4bce-824c-734e9d18f1f1-kube-api-access-c6wk7\") pod \"nova-scheduler-0\" (UID: \"362f5b81-de68-4bce-824c-734e9d18f1f1\") " pod="openstack/nova-scheduler-0" Feb 19 10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.595384 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d85ecb-0aba-485f-a679-452a09aed5d4" path="/var/lib/kubelet/pods/79d85ecb-0aba-485f-a679-452a09aed5d4/volumes" Feb 19 
10:30:32 crc kubenswrapper[4811]: I0219 10:30:32.720695 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 10:30:33 crc kubenswrapper[4811]: W0219 10:30:33.163479 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod362f5b81_de68_4bce_824c_734e9d18f1f1.slice/crio-ee19a0d87e026b5460420e35b712536bc6e9ef0d0359079d90e764462fec5677 WatchSource:0}: Error finding container ee19a0d87e026b5460420e35b712536bc6e9ef0d0359079d90e764462fec5677: Status 404 returned error can't find the container with id ee19a0d87e026b5460420e35b712536bc6e9ef0d0359079d90e764462fec5677 Feb 19 10:30:33 crc kubenswrapper[4811]: I0219 10:30:33.164980 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 10:30:33 crc kubenswrapper[4811]: I0219 10:30:33.304335 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"362f5b81-de68-4bce-824c-734e9d18f1f1","Type":"ContainerStarted","Data":"ee19a0d87e026b5460420e35b712536bc6e9ef0d0359079d90e764462fec5677"} Feb 19 10:30:34 crc kubenswrapper[4811]: I0219 10:30:34.326987 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"362f5b81-de68-4bce-824c-734e9d18f1f1","Type":"ContainerStarted","Data":"b599abc494516996f929d605c5989e04b5caf3599b7cb6cbb4358a1cc0c53bbe"} Feb 19 10:30:34 crc kubenswrapper[4811]: I0219 10:30:34.352855 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.352833625 podStartE2EDuration="2.352833625s" podCreationTimestamp="2026-02-19 10:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:30:34.345773775 +0000 UTC m=+1322.872238461" watchObservedRunningTime="2026-02-19 10:30:34.352833625 +0000 
UTC m=+1322.879298311" Feb 19 10:30:35 crc kubenswrapper[4811]: I0219 10:30:35.667083 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:30:35 crc kubenswrapper[4811]: I0219 10:30:35.667147 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 10:30:37 crc kubenswrapper[4811]: I0219 10:30:37.626403 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:30:37 crc kubenswrapper[4811]: I0219 10:30:37.626830 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 10:30:37 crc kubenswrapper[4811]: I0219 10:30:37.721495 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 10:30:38 crc kubenswrapper[4811]: I0219 10:30:38.638625 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2746bf91-50ba-4dc9-a6cf-1f7117f48ee2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:30:38 crc kubenswrapper[4811]: I0219 10:30:38.638673 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2746bf91-50ba-4dc9-a6cf-1f7117f48ee2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:30:40 crc kubenswrapper[4811]: I0219 10:30:40.666976 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:30:40 crc kubenswrapper[4811]: I0219 10:30:40.668744 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 10:30:41 crc kubenswrapper[4811]: I0219 10:30:41.681677 4811 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2e32654e-c645-4e60-9879-73ed179fa105" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:30:41 crc kubenswrapper[4811]: I0219 10:30:41.681971 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2e32654e-c645-4e60-9879-73ed179fa105" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 10:30:42 crc kubenswrapper[4811]: I0219 10:30:42.721969 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 10:30:42 crc kubenswrapper[4811]: I0219 10:30:42.753217 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 10:30:43 crc kubenswrapper[4811]: I0219 10:30:43.444457 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 10:30:44 crc kubenswrapper[4811]: I0219 10:30:44.428333 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 10:30:47 crc kubenswrapper[4811]: I0219 10:30:47.633760 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:30:47 crc kubenswrapper[4811]: I0219 10:30:47.635991 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:30:47 crc kubenswrapper[4811]: I0219 10:30:47.637518 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 10:30:47 crc kubenswrapper[4811]: I0219 10:30:47.642541 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" 
Feb 19 10:30:48 crc kubenswrapper[4811]: I0219 10:30:48.463789 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 10:30:48 crc kubenswrapper[4811]: I0219 10:30:48.471418 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 10:30:50 crc kubenswrapper[4811]: I0219 10:30:50.674681 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:30:50 crc kubenswrapper[4811]: I0219 10:30:50.678965 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 10:30:50 crc kubenswrapper[4811]: I0219 10:30:50.686839 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 10:30:51 crc kubenswrapper[4811]: I0219 10:30:51.495571 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 10:30:59 crc kubenswrapper[4811]: I0219 10:30:59.199551 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:31:00 crc kubenswrapper[4811]: I0219 10:31:00.131102 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:31:04 crc kubenswrapper[4811]: I0219 10:31:04.362650 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f263c1a8-dfa3-4383-849c-bde314ae5503" containerName="rabbitmq" containerID="cri-o://06aff5506c30621032dd59bbf567a788f7754e5d787a946942c8f30810ae381b" gracePeriod=604795 Feb 19 10:31:05 crc kubenswrapper[4811]: I0219 10:31:05.098718 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d7378877-24bd-40fc-89c8-6bc6115040a2" containerName="rabbitmq" containerID="cri-o://3aa5b3ed12c1474392602f778fb91ebade5465a969eaff3e1726fba5007fa711" 
gracePeriod=604796 Feb 19 10:31:10 crc kubenswrapper[4811]: I0219 10:31:10.228246 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f263c1a8-dfa3-4383-849c-bde314ae5503" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Feb 19 10:31:10 crc kubenswrapper[4811]: I0219 10:31:10.548760 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d7378877-24bd-40fc-89c8-6bc6115040a2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Feb 19 10:31:10 crc kubenswrapper[4811]: I0219 10:31:10.679155 4811 generic.go:334] "Generic (PLEG): container finished" podID="f263c1a8-dfa3-4383-849c-bde314ae5503" containerID="06aff5506c30621032dd59bbf567a788f7754e5d787a946942c8f30810ae381b" exitCode=0 Feb 19 10:31:10 crc kubenswrapper[4811]: I0219 10:31:10.679194 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f263c1a8-dfa3-4383-849c-bde314ae5503","Type":"ContainerDied","Data":"06aff5506c30621032dd59bbf567a788f7754e5d787a946942c8f30810ae381b"} Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.018668 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.120693 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f263c1a8-dfa3-4383-849c-bde314ae5503-pod-info\") pod \"f263c1a8-dfa3-4383-849c-bde314ae5503\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.120750 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"f263c1a8-dfa3-4383-849c-bde314ae5503\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.120813 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-erlang-cookie\") pod \"f263c1a8-dfa3-4383-849c-bde314ae5503\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.120898 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-config-data\") pod \"f263c1a8-dfa3-4383-849c-bde314ae5503\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.120997 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-plugins-conf\") pod \"f263c1a8-dfa3-4383-849c-bde314ae5503\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.121055 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-server-conf\") pod \"f263c1a8-dfa3-4383-849c-bde314ae5503\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.121092 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kphf\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-kube-api-access-5kphf\") pod \"f263c1a8-dfa3-4383-849c-bde314ae5503\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.121124 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-confd\") pod \"f263c1a8-dfa3-4383-849c-bde314ae5503\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.121194 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f263c1a8-dfa3-4383-849c-bde314ae5503-erlang-cookie-secret\") pod \"f263c1a8-dfa3-4383-849c-bde314ae5503\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.121230 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-plugins\") pod \"f263c1a8-dfa3-4383-849c-bde314ae5503\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.121263 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-tls\") pod \"f263c1a8-dfa3-4383-849c-bde314ae5503\" (UID: \"f263c1a8-dfa3-4383-849c-bde314ae5503\") " Feb 19 10:31:11 crc 
kubenswrapper[4811]: I0219 10:31:11.121599 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f263c1a8-dfa3-4383-849c-bde314ae5503" (UID: "f263c1a8-dfa3-4383-849c-bde314ae5503"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.122038 4811 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.122184 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f263c1a8-dfa3-4383-849c-bde314ae5503" (UID: "f263c1a8-dfa3-4383-849c-bde314ae5503"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.123500 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f263c1a8-dfa3-4383-849c-bde314ae5503" (UID: "f263c1a8-dfa3-4383-849c-bde314ae5503"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.132867 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f263c1a8-dfa3-4383-849c-bde314ae5503-pod-info" (OuterVolumeSpecName: "pod-info") pod "f263c1a8-dfa3-4383-849c-bde314ae5503" (UID: "f263c1a8-dfa3-4383-849c-bde314ae5503"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.133226 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f263c1a8-dfa3-4383-849c-bde314ae5503-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f263c1a8-dfa3-4383-849c-bde314ae5503" (UID: "f263c1a8-dfa3-4383-849c-bde314ae5503"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.133284 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f263c1a8-dfa3-4383-849c-bde314ae5503" (UID: "f263c1a8-dfa3-4383-849c-bde314ae5503"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.139705 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "f263c1a8-dfa3-4383-849c-bde314ae5503" (UID: "f263c1a8-dfa3-4383-849c-bde314ae5503"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.145895 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-kube-api-access-5kphf" (OuterVolumeSpecName: "kube-api-access-5kphf") pod "f263c1a8-dfa3-4383-849c-bde314ae5503" (UID: "f263c1a8-dfa3-4383-849c-bde314ae5503"). InnerVolumeSpecName "kube-api-access-5kphf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.195914 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-config-data" (OuterVolumeSpecName: "config-data") pod "f263c1a8-dfa3-4383-849c-bde314ae5503" (UID: "f263c1a8-dfa3-4383-849c-bde314ae5503"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.224223 4811 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f263c1a8-dfa3-4383-849c-bde314ae5503-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.224263 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.224275 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.224391 4811 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f263c1a8-dfa3-4383-849c-bde314ae5503-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.224426 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.224437 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.224468 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.224483 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kphf\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-kube-api-access-5kphf\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.251465 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-server-conf" (OuterVolumeSpecName: "server-conf") pod "f263c1a8-dfa3-4383-849c-bde314ae5503" (UID: "f263c1a8-dfa3-4383-849c-bde314ae5503"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.260207 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7z2rt"] Feb 19 10:31:11 crc kubenswrapper[4811]: E0219 10:31:11.260585 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f263c1a8-dfa3-4383-849c-bde314ae5503" containerName="rabbitmq" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.260598 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f263c1a8-dfa3-4383-849c-bde314ae5503" containerName="rabbitmq" Feb 19 10:31:11 crc kubenswrapper[4811]: E0219 10:31:11.260654 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f263c1a8-dfa3-4383-849c-bde314ae5503" containerName="setup-container" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.260666 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f263c1a8-dfa3-4383-849c-bde314ae5503" containerName="setup-container" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.260854 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f263c1a8-dfa3-4383-849c-bde314ae5503" containerName="rabbitmq" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.265139 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.271589 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.294020 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.294266 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7z2rt"] Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.305604 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f263c1a8-dfa3-4383-849c-bde314ae5503" (UID: "f263c1a8-dfa3-4383-849c-bde314ae5503"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.325947 4811 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f263c1a8-dfa3-4383-849c-bde314ae5503-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.325986 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f263c1a8-dfa3-4383-849c-bde314ae5503-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.326002 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.427186 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.427287 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.427326 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: 
\"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.427353 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.427388 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px94f\" (UniqueName: \"kubernetes.io/projected/6fa09a94-1cae-4e90-ade7-1c3ec509d231-kube-api-access-px94f\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.427482 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-config\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.427567 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.530392 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.530513 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.530545 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.530574 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.530607 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px94f\" (UniqueName: \"kubernetes.io/projected/6fa09a94-1cae-4e90-ade7-1c3ec509d231-kube-api-access-px94f\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.530637 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-config\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: 
\"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.530698 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.531764 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.531848 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.531856 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.532259 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.532494 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-config\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.532604 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.554791 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px94f\" (UniqueName: \"kubernetes.io/projected/6fa09a94-1cae-4e90-ade7-1c3ec509d231-kube-api-access-px94f\") pod \"dnsmasq-dns-79bd4cc8c9-7z2rt\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.624332 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.695368 4811 generic.go:334] "Generic (PLEG): container finished" podID="d7378877-24bd-40fc-89c8-6bc6115040a2" containerID="3aa5b3ed12c1474392602f778fb91ebade5465a969eaff3e1726fba5007fa711" exitCode=0 Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.695467 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7378877-24bd-40fc-89c8-6bc6115040a2","Type":"ContainerDied","Data":"3aa5b3ed12c1474392602f778fb91ebade5465a969eaff3e1726fba5007fa711"} Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.695496 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d7378877-24bd-40fc-89c8-6bc6115040a2","Type":"ContainerDied","Data":"5e91ae3b4782c822a31d69ac700b3045f54cf67c5797fa0fa3cd88c0a2216b92"} Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.695508 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e91ae3b4782c822a31d69ac700b3045f54cf67c5797fa0fa3cd88c0a2216b92" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.697074 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f263c1a8-dfa3-4383-849c-bde314ae5503","Type":"ContainerDied","Data":"a4175518b21f1289bf245dc659c5bd01d715aa4e534e7ebcae47fad7a436d880"} Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.697105 4811 scope.go:117] "RemoveContainer" containerID="06aff5506c30621032dd59bbf567a788f7754e5d787a946942c8f30810ae381b" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.697253 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.779157 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.792093 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.842657 4811 scope.go:117] "RemoveContainer" containerID="f3e9158665e56adf6d6316c3aa8190c955795799012ad86edea35e82b7ad74e8" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.846345 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.871345 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:31:11 crc kubenswrapper[4811]: E0219 10:31:11.871928 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7378877-24bd-40fc-89c8-6bc6115040a2" containerName="setup-container" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.871952 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7378877-24bd-40fc-89c8-6bc6115040a2" containerName="setup-container" Feb 19 10:31:11 crc kubenswrapper[4811]: E0219 10:31:11.872005 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7378877-24bd-40fc-89c8-6bc6115040a2" containerName="rabbitmq" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.872014 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7378877-24bd-40fc-89c8-6bc6115040a2" containerName="rabbitmq" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.872241 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7378877-24bd-40fc-89c8-6bc6115040a2" containerName="rabbitmq" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.873649 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.878629 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.878886 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.879083 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.879239 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-thsf7" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.879432 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.879681 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.881123 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.894784 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.939343 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-server-conf\") pod \"d7378877-24bd-40fc-89c8-6bc6115040a2\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.939391 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-plugins-conf\") pod 
\"d7378877-24bd-40fc-89c8-6bc6115040a2\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.939408 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"d7378877-24bd-40fc-89c8-6bc6115040a2\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.939440 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d7378877-24bd-40fc-89c8-6bc6115040a2-pod-info\") pod \"d7378877-24bd-40fc-89c8-6bc6115040a2\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.939513 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-tls\") pod \"d7378877-24bd-40fc-89c8-6bc6115040a2\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.939643 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-config-data\") pod \"d7378877-24bd-40fc-89c8-6bc6115040a2\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.939664 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7378877-24bd-40fc-89c8-6bc6115040a2-erlang-cookie-secret\") pod \"d7378877-24bd-40fc-89c8-6bc6115040a2\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.939686 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-plugins\") pod \"d7378877-24bd-40fc-89c8-6bc6115040a2\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.939720 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wptsg\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-kube-api-access-wptsg\") pod \"d7378877-24bd-40fc-89c8-6bc6115040a2\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.939775 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-confd\") pod \"d7378877-24bd-40fc-89c8-6bc6115040a2\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.939803 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-erlang-cookie\") pod \"d7378877-24bd-40fc-89c8-6bc6115040a2\" (UID: \"d7378877-24bd-40fc-89c8-6bc6115040a2\") " Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.941027 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d7378877-24bd-40fc-89c8-6bc6115040a2" (UID: "d7378877-24bd-40fc-89c8-6bc6115040a2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.943101 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d7378877-24bd-40fc-89c8-6bc6115040a2" (UID: "d7378877-24bd-40fc-89c8-6bc6115040a2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.945091 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d7378877-24bd-40fc-89c8-6bc6115040a2" (UID: "d7378877-24bd-40fc-89c8-6bc6115040a2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.949236 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-kube-api-access-wptsg" (OuterVolumeSpecName: "kube-api-access-wptsg") pod "d7378877-24bd-40fc-89c8-6bc6115040a2" (UID: "d7378877-24bd-40fc-89c8-6bc6115040a2"). InnerVolumeSpecName "kube-api-access-wptsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.953762 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "d7378877-24bd-40fc-89c8-6bc6115040a2" (UID: "d7378877-24bd-40fc-89c8-6bc6115040a2"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.953788 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d7378877-24bd-40fc-89c8-6bc6115040a2" (UID: "d7378877-24bd-40fc-89c8-6bc6115040a2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.957008 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d7378877-24bd-40fc-89c8-6bc6115040a2-pod-info" (OuterVolumeSpecName: "pod-info") pod "d7378877-24bd-40fc-89c8-6bc6115040a2" (UID: "d7378877-24bd-40fc-89c8-6bc6115040a2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 10:31:11 crc kubenswrapper[4811]: I0219 10:31:11.963499 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7378877-24bd-40fc-89c8-6bc6115040a2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d7378877-24bd-40fc-89c8-6bc6115040a2" (UID: "d7378877-24bd-40fc-89c8-6bc6115040a2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.008241 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-config-data" (OuterVolumeSpecName: "config-data") pod "d7378877-24bd-40fc-89c8-6bc6115040a2" (UID: "d7378877-24bd-40fc-89c8-6bc6115040a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.023937 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-server-conf" (OuterVolumeSpecName: "server-conf") pod "d7378877-24bd-40fc-89c8-6bc6115040a2" (UID: "d7378877-24bd-40fc-89c8-6bc6115040a2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.041425 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8683fae-0b62-402d-bec3-5c3be97c8c98-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.041518 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.041601 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8683fae-0b62-402d-bec3-5c3be97c8c98-config-data\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.041639 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8683fae-0b62-402d-bec3-5c3be97c8c98-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " 
pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.041676 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8683fae-0b62-402d-bec3-5c3be97c8c98-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.041716 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.041746 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8683fae-0b62-402d-bec3-5c3be97c8c98-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.042477 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42hbj\" (UniqueName: \"kubernetes.io/projected/c8683fae-0b62-402d-bec3-5c3be97c8c98-kube-api-access-42hbj\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.042610 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: 
I0219 10:31:12.042736 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.042852 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.043443 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.043514 4811 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.043528 4811 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.043557 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.043570 4811 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/d7378877-24bd-40fc-89c8-6bc6115040a2-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.043583 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.043596 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7378877-24bd-40fc-89c8-6bc6115040a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.043607 4811 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d7378877-24bd-40fc-89c8-6bc6115040a2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.043615 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.043624 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wptsg\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-kube-api-access-wptsg\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.064565 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.104732 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d7378877-24bd-40fc-89c8-6bc6115040a2" 
(UID: "d7378877-24bd-40fc-89c8-6bc6115040a2"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.148489 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.148815 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.148843 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8683fae-0b62-402d-bec3-5c3be97c8c98-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.148879 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.148941 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8683fae-0b62-402d-bec3-5c3be97c8c98-config-data\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc 
kubenswrapper[4811]: I0219 10:31:12.148968 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8683fae-0b62-402d-bec3-5c3be97c8c98-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.148997 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8683fae-0b62-402d-bec3-5c3be97c8c98-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.149028 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.149051 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8683fae-0b62-402d-bec3-5c3be97c8c98-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.149093 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42hbj\" (UniqueName: \"kubernetes.io/projected/c8683fae-0b62-402d-bec3-5c3be97c8c98-kube-api-access-42hbj\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.149113 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.149174 4811 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d7378877-24bd-40fc-89c8-6bc6115040a2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.149184 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.149293 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.150847 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.153790 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8683fae-0b62-402d-bec3-5c3be97c8c98-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.154478 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/c8683fae-0b62-402d-bec3-5c3be97c8c98-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.159177 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.165044 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.170099 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8683fae-0b62-402d-bec3-5c3be97c8c98-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.170845 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8683fae-0b62-402d-bec3-5c3be97c8c98-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.172989 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8683fae-0b62-402d-bec3-5c3be97c8c98-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 
crc kubenswrapper[4811]: I0219 10:31:12.173205 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8683fae-0b62-402d-bec3-5c3be97c8c98-config-data\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.183227 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42hbj\" (UniqueName: \"kubernetes.io/projected/c8683fae-0b62-402d-bec3-5c3be97c8c98-kube-api-access-42hbj\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.208819 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"c8683fae-0b62-402d-bec3-5c3be97c8c98\") " pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.229662 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7z2rt"] Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.230357 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.610363 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f263c1a8-dfa3-4383-849c-bde314ae5503" path="/var/lib/kubelet/pods/f263c1a8-dfa3-4383-849c-bde314ae5503/volumes" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.708771 4811 generic.go:334] "Generic (PLEG): container finished" podID="6fa09a94-1cae-4e90-ade7-1c3ec509d231" containerID="ad8bb6eb5b70f3f3630faf5d1bb5a6a5cba6036a1d933233ce34c7cf7ce374a9" exitCode=0 Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.708880 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.711247 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" event={"ID":"6fa09a94-1cae-4e90-ade7-1c3ec509d231","Type":"ContainerDied","Data":"ad8bb6eb5b70f3f3630faf5d1bb5a6a5cba6036a1d933233ce34c7cf7ce374a9"} Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.711301 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" event={"ID":"6fa09a94-1cae-4e90-ade7-1c3ec509d231","Type":"ContainerStarted","Data":"0daf82b04d9e43062de26389f459975fad58140ea95602d5f0e87ef06303c1f9"} Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.768530 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.792320 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.819653 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.821627 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.823676 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.824993 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.825115 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.825239 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.830704 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.831085 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vv7vf" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.831123 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.850584 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.902537 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.977206 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:12 crc 
kubenswrapper[4811]: I0219 10:31:12.977286 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44cb8367-2f57-4e24-9d48-b2d22a39d128-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.977323 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.977340 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44cb8367-2f57-4e24-9d48-b2d22a39d128-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.977382 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.977402 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 
10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.977417 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.977492 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44cb8367-2f57-4e24-9d48-b2d22a39d128-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.977618 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44cb8367-2f57-4e24-9d48-b2d22a39d128-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.977640 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qft4t\" (UniqueName: \"kubernetes.io/projected/44cb8367-2f57-4e24-9d48-b2d22a39d128-kube-api-access-qft4t\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:12 crc kubenswrapper[4811]: I0219 10:31:12.977665 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44cb8367-2f57-4e24-9d48-b2d22a39d128-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc 
kubenswrapper[4811]: I0219 10:31:13.080162 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.080227 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.080252 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.080291 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44cb8367-2f57-4e24-9d48-b2d22a39d128-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.080472 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44cb8367-2f57-4e24-9d48-b2d22a39d128-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.080504 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qft4t\" (UniqueName: \"kubernetes.io/projected/44cb8367-2f57-4e24-9d48-b2d22a39d128-kube-api-access-qft4t\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.080539 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44cb8367-2f57-4e24-9d48-b2d22a39d128-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.080583 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.080619 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44cb8367-2f57-4e24-9d48-b2d22a39d128-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.080658 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.080685 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/44cb8367-2f57-4e24-9d48-b2d22a39d128-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.082251 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44cb8367-2f57-4e24-9d48-b2d22a39d128-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.082585 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.082936 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.083251 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44cb8367-2f57-4e24-9d48-b2d22a39d128-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.083588 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"44cb8367-2f57-4e24-9d48-b2d22a39d128\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.084328 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44cb8367-2f57-4e24-9d48-b2d22a39d128-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.088034 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.088475 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44cb8367-2f57-4e24-9d48-b2d22a39d128-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.094960 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44cb8367-2f57-4e24-9d48-b2d22a39d128-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.095257 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44cb8367-2f57-4e24-9d48-b2d22a39d128-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc 
kubenswrapper[4811]: I0219 10:31:13.108983 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qft4t\" (UniqueName: \"kubernetes.io/projected/44cb8367-2f57-4e24-9d48-b2d22a39d128-kube-api-access-qft4t\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.132170 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"44cb8367-2f57-4e24-9d48-b2d22a39d128\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.277323 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.720681 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" event={"ID":"6fa09a94-1cae-4e90-ade7-1c3ec509d231","Type":"ContainerStarted","Data":"23357173b31417876e5ec7e3771bfd59911038ce7af3adecbb4b44cec0e8adf3"} Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.721667 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.723255 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c8683fae-0b62-402d-bec3-5c3be97c8c98","Type":"ContainerStarted","Data":"0c193b5e9a6870cc45990848ab2707a0f93f7e60a8df47953190693dc5db24fc"} Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.749667 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" podStartSLOduration=2.74964484 podStartE2EDuration="2.74964484s" podCreationTimestamp="2026-02-19 10:31:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:31:13.739983693 +0000 UTC m=+1362.266448379" watchObservedRunningTime="2026-02-19 10:31:13.74964484 +0000 UTC m=+1362.276109516" Feb 19 10:31:13 crc kubenswrapper[4811]: I0219 10:31:13.760724 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 10:31:13 crc kubenswrapper[4811]: W0219 10:31:13.834137 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44cb8367_2f57_4e24_9d48_b2d22a39d128.slice/crio-240a5e8530a5f3577df617461390efbf90cd2f91f19c0ee0b33f067a916fea1f WatchSource:0}: Error finding container 240a5e8530a5f3577df617461390efbf90cd2f91f19c0ee0b33f067a916fea1f: Status 404 returned error can't find the container with id 240a5e8530a5f3577df617461390efbf90cd2f91f19c0ee0b33f067a916fea1f Feb 19 10:31:14 crc kubenswrapper[4811]: I0219 10:31:14.596047 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7378877-24bd-40fc-89c8-6bc6115040a2" path="/var/lib/kubelet/pods/d7378877-24bd-40fc-89c8-6bc6115040a2/volumes" Feb 19 10:31:14 crc kubenswrapper[4811]: I0219 10:31:14.733779 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"44cb8367-2f57-4e24-9d48-b2d22a39d128","Type":"ContainerStarted","Data":"240a5e8530a5f3577df617461390efbf90cd2f91f19c0ee0b33f067a916fea1f"} Feb 19 10:31:14 crc kubenswrapper[4811]: I0219 10:31:14.735533 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c8683fae-0b62-402d-bec3-5c3be97c8c98","Type":"ContainerStarted","Data":"8af34d05ba1abdf76a6c258e1176011e28dfc825ed3e3978b54b087406ab35cc"} Feb 19 10:31:16 crc kubenswrapper[4811]: I0219 10:31:16.754430 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"44cb8367-2f57-4e24-9d48-b2d22a39d128","Type":"ContainerStarted","Data":"e0f96f4c4fefa4cfebac053bca5511fa4e72ef342514f16254f1a9bc584f945b"} Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.626672 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.718017 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rfmsw"] Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.718310 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" podUID="9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" containerName="dnsmasq-dns" containerID="cri-o://ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467" gracePeriod=10 Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.907553 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-xm8bw"] Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.909704 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.938618 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-xm8bw"] Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.982876 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.982938 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-dns-svc\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.982967 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.983244 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pljxf\" (UniqueName: \"kubernetes.io/projected/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-kube-api-access-pljxf\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.983293 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.983463 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:21 crc kubenswrapper[4811]: I0219 10:31:21.983669 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-config\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.087715 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.087785 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-dns-svc\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.087811 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.087849 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pljxf\" (UniqueName: \"kubernetes.io/projected/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-kube-api-access-pljxf\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.087866 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.087904 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.087935 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-config\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.088918 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-config\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.089470 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.089982 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-dns-svc\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.090535 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.091352 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.091935 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.134690 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pljxf\" (UniqueName: \"kubernetes.io/projected/deca690b-51f2-4ee0-adbb-d7dfaa165fcc-kube-api-access-pljxf\") pod \"dnsmasq-dns-55478c4467-xm8bw\" (UID: \"deca690b-51f2-4ee0-adbb-d7dfaa165fcc\") " pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.241017 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.386507 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.504170 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-config\") pod \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.504230 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqjzb\" (UniqueName: \"kubernetes.io/projected/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-kube-api-access-hqjzb\") pod \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.504299 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-nb\") pod \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\" 
(UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.504330 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-swift-storage-0\") pod \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.504356 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-svc\") pod \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.504562 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-sb\") pod \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\" (UID: \"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca\") " Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.512553 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-kube-api-access-hqjzb" (OuterVolumeSpecName: "kube-api-access-hqjzb") pod "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" (UID: "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca"). InnerVolumeSpecName "kube-api-access-hqjzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.565958 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" (UID: "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.569138 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" (UID: "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.582249 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" (UID: "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.587427 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-config" (OuterVolumeSpecName: "config") pod "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" (UID: "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.595526 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" (UID: "9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.606854 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.606913 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.606937 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.606951 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.606962 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.606975 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqjzb\" (UniqueName: \"kubernetes.io/projected/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca-kube-api-access-hqjzb\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.774402 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-xm8bw"] Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.809279 4811 generic.go:334] "Generic (PLEG): container finished" podID="9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" 
containerID="ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467" exitCode=0 Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.809999 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.810612 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" event={"ID":"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca","Type":"ContainerDied","Data":"ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467"} Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.810739 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rfmsw" event={"ID":"9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca","Type":"ContainerDied","Data":"e5c033b71c1a1caea5acb82aedba4145a239a20342ef81286d166b023a8cc3a4"} Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.810821 4811 scope.go:117] "RemoveContainer" containerID="ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467" Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.822441 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-xm8bw" event={"ID":"deca690b-51f2-4ee0-adbb-d7dfaa165fcc","Type":"ContainerStarted","Data":"17bdde82ab2dd3c49aa58686ea1fcac8310b5d398dae0498078c001ae837ede5"} Feb 19 10:31:22 crc kubenswrapper[4811]: I0219 10:31:22.978903 4811 scope.go:117] "RemoveContainer" containerID="0c2fce79cd7c6b05d66a045b2c3061f296915748d992c8ed72007fb44b5e3133" Feb 19 10:31:23 crc kubenswrapper[4811]: I0219 10:31:23.012113 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rfmsw"] Feb 19 10:31:23 crc kubenswrapper[4811]: I0219 10:31:23.027530 4811 scope.go:117] "RemoveContainer" containerID="ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467" Feb 19 10:31:23 crc kubenswrapper[4811]: E0219 10:31:23.028148 4811 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467\": container with ID starting with ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467 not found: ID does not exist" containerID="ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467" Feb 19 10:31:23 crc kubenswrapper[4811]: I0219 10:31:23.028239 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467"} err="failed to get container status \"ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467\": rpc error: code = NotFound desc = could not find container \"ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467\": container with ID starting with ed6eba28b1bf086564ff85722025514cf16eabe98dade58110b9cdad6bdb7467 not found: ID does not exist" Feb 19 10:31:23 crc kubenswrapper[4811]: I0219 10:31:23.028284 4811 scope.go:117] "RemoveContainer" containerID="0c2fce79cd7c6b05d66a045b2c3061f296915748d992c8ed72007fb44b5e3133" Feb 19 10:31:23 crc kubenswrapper[4811]: E0219 10:31:23.028900 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2fce79cd7c6b05d66a045b2c3061f296915748d992c8ed72007fb44b5e3133\": container with ID starting with 0c2fce79cd7c6b05d66a045b2c3061f296915748d992c8ed72007fb44b5e3133 not found: ID does not exist" containerID="0c2fce79cd7c6b05d66a045b2c3061f296915748d992c8ed72007fb44b5e3133" Feb 19 10:31:23 crc kubenswrapper[4811]: I0219 10:31:23.028967 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2fce79cd7c6b05d66a045b2c3061f296915748d992c8ed72007fb44b5e3133"} err="failed to get container status \"0c2fce79cd7c6b05d66a045b2c3061f296915748d992c8ed72007fb44b5e3133\": rpc error: code = NotFound desc = could 
not find container \"0c2fce79cd7c6b05d66a045b2c3061f296915748d992c8ed72007fb44b5e3133\": container with ID starting with 0c2fce79cd7c6b05d66a045b2c3061f296915748d992c8ed72007fb44b5e3133 not found: ID does not exist" Feb 19 10:31:23 crc kubenswrapper[4811]: I0219 10:31:23.029621 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rfmsw"] Feb 19 10:31:23 crc kubenswrapper[4811]: I0219 10:31:23.836585 4811 generic.go:334] "Generic (PLEG): container finished" podID="deca690b-51f2-4ee0-adbb-d7dfaa165fcc" containerID="00873cd7e5414dc3dd3522161285a90ea46c0b1b4c2a3daa8de12e4a44b993ab" exitCode=0 Feb 19 10:31:23 crc kubenswrapper[4811]: I0219 10:31:23.836993 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-xm8bw" event={"ID":"deca690b-51f2-4ee0-adbb-d7dfaa165fcc","Type":"ContainerDied","Data":"00873cd7e5414dc3dd3522161285a90ea46c0b1b4c2a3daa8de12e4a44b993ab"} Feb 19 10:31:24 crc kubenswrapper[4811]: I0219 10:31:24.595402 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" path="/var/lib/kubelet/pods/9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca/volumes" Feb 19 10:31:24 crc kubenswrapper[4811]: I0219 10:31:24.857882 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-xm8bw" event={"ID":"deca690b-51f2-4ee0-adbb-d7dfaa165fcc","Type":"ContainerStarted","Data":"040d3d9a738a4341b950c11428df8c527cc63234fdcedcec59ad9cb22061dd6a"} Feb 19 10:31:24 crc kubenswrapper[4811]: I0219 10:31:24.858156 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:24 crc kubenswrapper[4811]: I0219 10:31:24.893877 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-xm8bw" podStartSLOduration=3.893849968 podStartE2EDuration="3.893849968s" podCreationTimestamp="2026-02-19 10:31:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:31:24.885641639 +0000 UTC m=+1373.412106345" watchObservedRunningTime="2026-02-19 10:31:24.893849968 +0000 UTC m=+1373.420314654" Feb 19 10:31:32 crc kubenswrapper[4811]: I0219 10:31:32.242655 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-xm8bw" Feb 19 10:31:32 crc kubenswrapper[4811]: I0219 10:31:32.314756 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7z2rt"] Feb 19 10:31:32 crc kubenswrapper[4811]: I0219 10:31:32.315156 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" podUID="6fa09a94-1cae-4e90-ade7-1c3ec509d231" containerName="dnsmasq-dns" containerID="cri-o://23357173b31417876e5ec7e3771bfd59911038ce7af3adecbb4b44cec0e8adf3" gracePeriod=10 Feb 19 10:31:32 crc kubenswrapper[4811]: I0219 10:31:32.937383 4811 generic.go:334] "Generic (PLEG): container finished" podID="6fa09a94-1cae-4e90-ade7-1c3ec509d231" containerID="23357173b31417876e5ec7e3771bfd59911038ce7af3adecbb4b44cec0e8adf3" exitCode=0 Feb 19 10:31:32 crc kubenswrapper[4811]: I0219 10:31:32.937470 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" event={"ID":"6fa09a94-1cae-4e90-ade7-1c3ec509d231","Type":"ContainerDied","Data":"23357173b31417876e5ec7e3771bfd59911038ce7af3adecbb4b44cec0e8adf3"} Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.386723 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.474746 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-swift-storage-0\") pod \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.474842 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-sb\") pod \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.474904 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-nb\") pod \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.474948 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-config\") pod \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.474984 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-svc\") pod \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.475012 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px94f\" 
(UniqueName: \"kubernetes.io/projected/6fa09a94-1cae-4e90-ade7-1c3ec509d231-kube-api-access-px94f\") pod \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.475231 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-openstack-edpm-ipam\") pod \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\" (UID: \"6fa09a94-1cae-4e90-ade7-1c3ec509d231\") " Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.495568 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa09a94-1cae-4e90-ade7-1c3ec509d231-kube-api-access-px94f" (OuterVolumeSpecName: "kube-api-access-px94f") pod "6fa09a94-1cae-4e90-ade7-1c3ec509d231" (UID: "6fa09a94-1cae-4e90-ade7-1c3ec509d231"). InnerVolumeSpecName "kube-api-access-px94f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.548075 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6fa09a94-1cae-4e90-ade7-1c3ec509d231" (UID: "6fa09a94-1cae-4e90-ade7-1c3ec509d231"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.552101 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6fa09a94-1cae-4e90-ade7-1c3ec509d231" (UID: "6fa09a94-1cae-4e90-ade7-1c3ec509d231"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.558129 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fa09a94-1cae-4e90-ade7-1c3ec509d231" (UID: "6fa09a94-1cae-4e90-ade7-1c3ec509d231"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.561837 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-config" (OuterVolumeSpecName: "config") pod "6fa09a94-1cae-4e90-ade7-1c3ec509d231" (UID: "6fa09a94-1cae-4e90-ade7-1c3ec509d231"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.563321 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "6fa09a94-1cae-4e90-ade7-1c3ec509d231" (UID: "6fa09a94-1cae-4e90-ade7-1c3ec509d231"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.573299 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6fa09a94-1cae-4e90-ade7-1c3ec509d231" (UID: "6fa09a94-1cae-4e90-ade7-1c3ec509d231"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.578710 4811 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.578748 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.578761 4811 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.578771 4811 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-config\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.578780 4811 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.578789 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px94f\" (UniqueName: \"kubernetes.io/projected/6fa09a94-1cae-4e90-ade7-1c3ec509d231-kube-api-access-px94f\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.578799 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6fa09a94-1cae-4e90-ade7-1c3ec509d231-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.948417 
4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" event={"ID":"6fa09a94-1cae-4e90-ade7-1c3ec509d231","Type":"ContainerDied","Data":"0daf82b04d9e43062de26389f459975fad58140ea95602d5f0e87ef06303c1f9"} Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.948525 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-7z2rt" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.948772 4811 scope.go:117] "RemoveContainer" containerID="23357173b31417876e5ec7e3771bfd59911038ce7af3adecbb4b44cec0e8adf3" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.975980 4811 scope.go:117] "RemoveContainer" containerID="ad8bb6eb5b70f3f3630faf5d1bb5a6a5cba6036a1d933233ce34c7cf7ce374a9" Feb 19 10:31:33 crc kubenswrapper[4811]: I0219 10:31:33.990412 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7z2rt"] Feb 19 10:31:34 crc kubenswrapper[4811]: I0219 10:31:34.001433 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-7z2rt"] Feb 19 10:31:34 crc kubenswrapper[4811]: I0219 10:31:34.592426 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa09a94-1cae-4e90-ade7-1c3ec509d231" path="/var/lib/kubelet/pods/6fa09a94-1cae-4e90-ade7-1c3ec509d231/volumes" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.362080 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt"] Feb 19 10:31:45 crc kubenswrapper[4811]: E0219 10:31:45.363125 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" containerName="dnsmasq-dns" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.363143 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" containerName="dnsmasq-dns" Feb 19 10:31:45 crc kubenswrapper[4811]: E0219 
10:31:45.363164 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa09a94-1cae-4e90-ade7-1c3ec509d231" containerName="init" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.363172 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa09a94-1cae-4e90-ade7-1c3ec509d231" containerName="init" Feb 19 10:31:45 crc kubenswrapper[4811]: E0219 10:31:45.363196 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa09a94-1cae-4e90-ade7-1c3ec509d231" containerName="dnsmasq-dns" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.363205 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa09a94-1cae-4e90-ade7-1c3ec509d231" containerName="dnsmasq-dns" Feb 19 10:31:45 crc kubenswrapper[4811]: E0219 10:31:45.363224 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" containerName="init" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.363231 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" containerName="init" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.363444 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa09a94-1cae-4e90-ade7-1c3ec509d231" containerName="dnsmasq-dns" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.363481 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="9431cbdf-dd0d-4212-bc6e-b1767b6ca3ca" containerName="dnsmasq-dns" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.364284 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.369264 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.369266 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.369703 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.370069 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.378207 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt"] Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.541843 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8jdg\" (UniqueName: \"kubernetes.io/projected/5b78bd06-9aa5-4755-be4c-ee3d85655501-kube-api-access-l8jdg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.541927 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.542019 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.542058 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.643993 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.644081 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.644217 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8jdg\" 
(UniqueName: \"kubernetes.io/projected/5b78bd06-9aa5-4755-be4c-ee3d85655501-kube-api-access-l8jdg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.644262 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.652198 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.652557 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.653570 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.666111 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8jdg\" (UniqueName: \"kubernetes.io/projected/5b78bd06-9aa5-4755-be4c-ee3d85655501-kube-api-access-l8jdg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:45 crc kubenswrapper[4811]: I0219 10:31:45.683032 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:31:46 crc kubenswrapper[4811]: I0219 10:31:46.091966 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt"] Feb 19 10:31:46 crc kubenswrapper[4811]: E0219 10:31:46.826399 4811 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8683fae_0b62_402d_bec3_5c3be97c8c98.slice/crio-8af34d05ba1abdf76a6c258e1176011e28dfc825ed3e3978b54b087406ab35cc.scope\": RecentStats: unable to find data in memory cache]" Feb 19 10:31:47 crc kubenswrapper[4811]: I0219 10:31:47.072992 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" event={"ID":"5b78bd06-9aa5-4755-be4c-ee3d85655501","Type":"ContainerStarted","Data":"3eb65112541a5f68721f1b6b15c8d083878012c1522d387211086d1cb0c583ee"} Feb 19 10:31:47 crc kubenswrapper[4811]: I0219 10:31:47.075734 4811 generic.go:334] "Generic (PLEG): container finished" podID="c8683fae-0b62-402d-bec3-5c3be97c8c98" containerID="8af34d05ba1abdf76a6c258e1176011e28dfc825ed3e3978b54b087406ab35cc" exitCode=0 Feb 19 10:31:47 crc kubenswrapper[4811]: I0219 10:31:47.075777 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c8683fae-0b62-402d-bec3-5c3be97c8c98","Type":"ContainerDied","Data":"8af34d05ba1abdf76a6c258e1176011e28dfc825ed3e3978b54b087406ab35cc"} Feb 19 10:31:48 crc kubenswrapper[4811]: I0219 10:31:48.130317 4811 generic.go:334] "Generic (PLEG): container finished" podID="44cb8367-2f57-4e24-9d48-b2d22a39d128" containerID="e0f96f4c4fefa4cfebac053bca5511fa4e72ef342514f16254f1a9bc584f945b" exitCode=0 Feb 19 10:31:48 crc kubenswrapper[4811]: I0219 10:31:48.130406 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"44cb8367-2f57-4e24-9d48-b2d22a39d128","Type":"ContainerDied","Data":"e0f96f4c4fefa4cfebac053bca5511fa4e72ef342514f16254f1a9bc584f945b"} Feb 19 10:31:48 crc kubenswrapper[4811]: I0219 10:31:48.138405 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c8683fae-0b62-402d-bec3-5c3be97c8c98","Type":"ContainerStarted","Data":"28b207a2222b761828b63417f8d8ab72d865a308c25c6075a9976ee8c9f8bbe6"} Feb 19 10:31:48 crc kubenswrapper[4811]: I0219 10:31:48.138681 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 10:31:48 crc kubenswrapper[4811]: I0219 10:31:48.193208 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.193172809000004 podStartE2EDuration="37.193172809s" podCreationTimestamp="2026-02-19 10:31:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:31:48.181827599 +0000 UTC m=+1396.708292285" watchObservedRunningTime="2026-02-19 10:31:48.193172809 +0000 UTC m=+1396.719637495" Feb 19 10:31:56 crc kubenswrapper[4811]: I0219 10:31:56.787836 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:31:56 crc kubenswrapper[4811]: I0219 10:31:56.788838 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:31:57 crc kubenswrapper[4811]: I0219 10:31:57.234276 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" event={"ID":"5b78bd06-9aa5-4755-be4c-ee3d85655501","Type":"ContainerStarted","Data":"8970656d94aa2eb754f6862fb026cf68c9270511364ec9266a14d3a1ae5b477c"} Feb 19 10:31:57 crc kubenswrapper[4811]: I0219 10:31:57.237559 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"44cb8367-2f57-4e24-9d48-b2d22a39d128","Type":"ContainerStarted","Data":"091ff81cc1fd6134000673f703f980a89a8f9045de62ee0f2812c78150ca22e8"} Feb 19 10:31:57 crc kubenswrapper[4811]: I0219 10:31:57.237859 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:31:57 crc kubenswrapper[4811]: I0219 10:31:57.259543 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" podStartSLOduration=2.241656743 podStartE2EDuration="12.259525547s" podCreationTimestamp="2026-02-19 10:31:45 +0000 UTC" firstStartedPulling="2026-02-19 10:31:46.10825896 +0000 UTC m=+1394.634723646" lastFinishedPulling="2026-02-19 10:31:56.126127764 +0000 UTC m=+1404.652592450" observedRunningTime="2026-02-19 10:31:57.252746304 +0000 UTC m=+1405.779210990" 
watchObservedRunningTime="2026-02-19 10:31:57.259525547 +0000 UTC m=+1405.785990233" Feb 19 10:31:57 crc kubenswrapper[4811]: I0219 10:31:57.284182 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=45.284164466 podStartE2EDuration="45.284164466s" podCreationTimestamp="2026-02-19 10:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 10:31:57.277178618 +0000 UTC m=+1405.803643304" watchObservedRunningTime="2026-02-19 10:31:57.284164466 +0000 UTC m=+1405.810629142" Feb 19 10:32:02 crc kubenswrapper[4811]: I0219 10:32:02.235780 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 10:32:08 crc kubenswrapper[4811]: I0219 10:32:08.361854 4811 generic.go:334] "Generic (PLEG): container finished" podID="5b78bd06-9aa5-4755-be4c-ee3d85655501" containerID="8970656d94aa2eb754f6862fb026cf68c9270511364ec9266a14d3a1ae5b477c" exitCode=0 Feb 19 10:32:08 crc kubenswrapper[4811]: I0219 10:32:08.361972 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" event={"ID":"5b78bd06-9aa5-4755-be4c-ee3d85655501","Type":"ContainerDied","Data":"8970656d94aa2eb754f6862fb026cf68c9270511364ec9266a14d3a1ae5b477c"} Feb 19 10:32:09 crc kubenswrapper[4811]: I0219 10:32:09.905152 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:32:09 crc kubenswrapper[4811]: I0219 10:32:09.963042 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-repo-setup-combined-ca-bundle\") pod \"5b78bd06-9aa5-4755-be4c-ee3d85655501\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " Feb 19 10:32:09 crc kubenswrapper[4811]: I0219 10:32:09.963106 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8jdg\" (UniqueName: \"kubernetes.io/projected/5b78bd06-9aa5-4755-be4c-ee3d85655501-kube-api-access-l8jdg\") pod \"5b78bd06-9aa5-4755-be4c-ee3d85655501\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " Feb 19 10:32:09 crc kubenswrapper[4811]: I0219 10:32:09.963727 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-inventory\") pod \"5b78bd06-9aa5-4755-be4c-ee3d85655501\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " Feb 19 10:32:09 crc kubenswrapper[4811]: I0219 10:32:09.963811 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-ssh-key-openstack-edpm-ipam\") pod \"5b78bd06-9aa5-4755-be4c-ee3d85655501\" (UID: \"5b78bd06-9aa5-4755-be4c-ee3d85655501\") " Feb 19 10:32:09 crc kubenswrapper[4811]: I0219 10:32:09.972795 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5b78bd06-9aa5-4755-be4c-ee3d85655501" (UID: "5b78bd06-9aa5-4755-be4c-ee3d85655501"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:32:09 crc kubenswrapper[4811]: I0219 10:32:09.972932 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b78bd06-9aa5-4755-be4c-ee3d85655501-kube-api-access-l8jdg" (OuterVolumeSpecName: "kube-api-access-l8jdg") pod "5b78bd06-9aa5-4755-be4c-ee3d85655501" (UID: "5b78bd06-9aa5-4755-be4c-ee3d85655501"). InnerVolumeSpecName "kube-api-access-l8jdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:32:09 crc kubenswrapper[4811]: I0219 10:32:09.997242 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5b78bd06-9aa5-4755-be4c-ee3d85655501" (UID: "5b78bd06-9aa5-4755-be4c-ee3d85655501"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.005397 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-inventory" (OuterVolumeSpecName: "inventory") pod "5b78bd06-9aa5-4755-be4c-ee3d85655501" (UID: "5b78bd06-9aa5-4755-be4c-ee3d85655501"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.066735 4811 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.066766 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8jdg\" (UniqueName: \"kubernetes.io/projected/5b78bd06-9aa5-4755-be4c-ee3d85655501-kube-api-access-l8jdg\") on node \"crc\" DevicePath \"\"" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.066780 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.066789 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5b78bd06-9aa5-4755-be4c-ee3d85655501-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.385617 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" event={"ID":"5b78bd06-9aa5-4755-be4c-ee3d85655501","Type":"ContainerDied","Data":"3eb65112541a5f68721f1b6b15c8d083878012c1522d387211086d1cb0c583ee"} Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.385661 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eb65112541a5f68721f1b6b15c8d083878012c1522d387211086d1cb0c583ee" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.385710 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.507290 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg"] Feb 19 10:32:10 crc kubenswrapper[4811]: E0219 10:32:10.507977 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b78bd06-9aa5-4755-be4c-ee3d85655501" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.508007 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b78bd06-9aa5-4755-be4c-ee3d85655501" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.508331 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b78bd06-9aa5-4755-be4c-ee3d85655501" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.509532 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.517388 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.517466 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.517624 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.517764 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.524081 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg"] Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.578088 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98t9k\" (UniqueName: \"kubernetes.io/projected/36d956c8-5142-4ec4-b5fb-622c05d0e097-kube-api-access-98t9k\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bk8sg\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.578168 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bk8sg\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.578814 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bk8sg\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.681380 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bk8sg\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.681611 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98t9k\" (UniqueName: \"kubernetes.io/projected/36d956c8-5142-4ec4-b5fb-622c05d0e097-kube-api-access-98t9k\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bk8sg\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.681688 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bk8sg\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.687464 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-bk8sg\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.687715 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bk8sg\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.702540 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98t9k\" (UniqueName: \"kubernetes.io/projected/36d956c8-5142-4ec4-b5fb-622c05d0e097-kube-api-access-98t9k\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bk8sg\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:10 crc kubenswrapper[4811]: I0219 10:32:10.837312 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:11 crc kubenswrapper[4811]: I0219 10:32:11.422950 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg"] Feb 19 10:32:12 crc kubenswrapper[4811]: I0219 10:32:12.408780 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" event={"ID":"36d956c8-5142-4ec4-b5fb-622c05d0e097","Type":"ContainerStarted","Data":"926abf7b0c520e3f2b8cd0fd14ed8a3e11e52c6cf7460a1a9f57f7c1b2cbd25a"} Feb 19 10:32:12 crc kubenswrapper[4811]: I0219 10:32:12.409366 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" event={"ID":"36d956c8-5142-4ec4-b5fb-622c05d0e097","Type":"ContainerStarted","Data":"b8c9e24fc6ca3ceabb6c8f697b0e857a69b8c55dca4ca91e2be6d0f57019e634"} Feb 19 10:32:12 crc kubenswrapper[4811]: I0219 10:32:12.426181 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" podStartSLOduration=1.750083648 podStartE2EDuration="2.426158269s" podCreationTimestamp="2026-02-19 10:32:10 +0000 UTC" firstStartedPulling="2026-02-19 10:32:11.426367209 +0000 UTC m=+1419.952831885" lastFinishedPulling="2026-02-19 10:32:12.10244182 +0000 UTC m=+1420.628906506" observedRunningTime="2026-02-19 10:32:12.424090767 +0000 UTC m=+1420.950555453" watchObservedRunningTime="2026-02-19 10:32:12.426158269 +0000 UTC m=+1420.952622965" Feb 19 10:32:13 crc kubenswrapper[4811]: I0219 10:32:13.287840 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 10:32:15 crc kubenswrapper[4811]: I0219 10:32:15.441003 4811 generic.go:334] "Generic (PLEG): container finished" podID="36d956c8-5142-4ec4-b5fb-622c05d0e097" 
containerID="926abf7b0c520e3f2b8cd0fd14ed8a3e11e52c6cf7460a1a9f57f7c1b2cbd25a" exitCode=0 Feb 19 10:32:15 crc kubenswrapper[4811]: I0219 10:32:15.441052 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" event={"ID":"36d956c8-5142-4ec4-b5fb-622c05d0e097","Type":"ContainerDied","Data":"926abf7b0c520e3f2b8cd0fd14ed8a3e11e52c6cf7460a1a9f57f7c1b2cbd25a"} Feb 19 10:32:16 crc kubenswrapper[4811]: I0219 10:32:16.851076 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:16 crc kubenswrapper[4811]: I0219 10:32:16.937906 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98t9k\" (UniqueName: \"kubernetes.io/projected/36d956c8-5142-4ec4-b5fb-622c05d0e097-kube-api-access-98t9k\") pod \"36d956c8-5142-4ec4-b5fb-622c05d0e097\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " Feb 19 10:32:16 crc kubenswrapper[4811]: I0219 10:32:16.938098 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-ssh-key-openstack-edpm-ipam\") pod \"36d956c8-5142-4ec4-b5fb-622c05d0e097\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " Feb 19 10:32:16 crc kubenswrapper[4811]: I0219 10:32:16.938174 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-inventory\") pod \"36d956c8-5142-4ec4-b5fb-622c05d0e097\" (UID: \"36d956c8-5142-4ec4-b5fb-622c05d0e097\") " Feb 19 10:32:16 crc kubenswrapper[4811]: I0219 10:32:16.945806 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d956c8-5142-4ec4-b5fb-622c05d0e097-kube-api-access-98t9k" (OuterVolumeSpecName: "kube-api-access-98t9k") pod 
"36d956c8-5142-4ec4-b5fb-622c05d0e097" (UID: "36d956c8-5142-4ec4-b5fb-622c05d0e097"). InnerVolumeSpecName "kube-api-access-98t9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:32:16 crc kubenswrapper[4811]: I0219 10:32:16.970304 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-inventory" (OuterVolumeSpecName: "inventory") pod "36d956c8-5142-4ec4-b5fb-622c05d0e097" (UID: "36d956c8-5142-4ec4-b5fb-622c05d0e097"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:32:16 crc kubenswrapper[4811]: I0219 10:32:16.971947 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "36d956c8-5142-4ec4-b5fb-622c05d0e097" (UID: "36d956c8-5142-4ec4-b5fb-622c05d0e097"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.041016 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98t9k\" (UniqueName: \"kubernetes.io/projected/36d956c8-5142-4ec4-b5fb-622c05d0e097-kube-api-access-98t9k\") on node \"crc\" DevicePath \"\"" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.041057 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.041072 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36d956c8-5142-4ec4-b5fb-622c05d0e097-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.478680 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" event={"ID":"36d956c8-5142-4ec4-b5fb-622c05d0e097","Type":"ContainerDied","Data":"b8c9e24fc6ca3ceabb6c8f697b0e857a69b8c55dca4ca91e2be6d0f57019e634"} Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.478747 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8c9e24fc6ca3ceabb6c8f697b0e857a69b8c55dca4ca91e2be6d0f57019e634" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.478867 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bk8sg" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.582722 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz"] Feb 19 10:32:17 crc kubenswrapper[4811]: E0219 10:32:17.583463 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d956c8-5142-4ec4-b5fb-622c05d0e097" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.583488 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d956c8-5142-4ec4-b5fb-622c05d0e097" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.583705 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d956c8-5142-4ec4-b5fb-622c05d0e097" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.584892 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.596557 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz"] Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.624926 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.625244 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.625056 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.625574 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.656993 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.657310 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.657379 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzqh2\" (UniqueName: \"kubernetes.io/projected/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-kube-api-access-gzqh2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.657441 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.760271 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.760932 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzqh2\" (UniqueName: \"kubernetes.io/projected/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-kube-api-access-gzqh2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.760974 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.761058 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.767938 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.773135 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.774434 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.786752 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzqh2\" (UniqueName: \"kubernetes.io/projected/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-kube-api-access-gzqh2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:17 crc kubenswrapper[4811]: I0219 10:32:17.960143 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:32:18 crc kubenswrapper[4811]: I0219 10:32:18.686641 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz"] Feb 19 10:32:19 crc kubenswrapper[4811]: I0219 10:32:19.498544 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" event={"ID":"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9","Type":"ContainerStarted","Data":"4b010c5beb65cfdcc2f251c8e6f37fe5b85c492624d56450cdbc871a356217c0"} Feb 19 10:32:19 crc kubenswrapper[4811]: I0219 10:32:19.499167 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" event={"ID":"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9","Type":"ContainerStarted","Data":"56d319baf283d76ef42bb0d93549de53843bbcf715e67356d8c91154d0649ab0"} Feb 19 10:32:19 crc kubenswrapper[4811]: I0219 10:32:19.515975 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" podStartSLOduration=2.083268383 podStartE2EDuration="2.515951756s" podCreationTimestamp="2026-02-19 10:32:17 +0000 UTC" firstStartedPulling="2026-02-19 10:32:18.687331059 +0000 UTC m=+1427.213795745" 
lastFinishedPulling="2026-02-19 10:32:19.120014432 +0000 UTC m=+1427.646479118" observedRunningTime="2026-02-19 10:32:19.511699598 +0000 UTC m=+1428.038164294" watchObservedRunningTime="2026-02-19 10:32:19.515951756 +0000 UTC m=+1428.042416442" Feb 19 10:32:26 crc kubenswrapper[4811]: I0219 10:32:26.788051 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:32:26 crc kubenswrapper[4811]: I0219 10:32:26.788692 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:32:46 crc kubenswrapper[4811]: I0219 10:32:46.712920 4811 scope.go:117] "RemoveContainer" containerID="f0ee88a943abf03f9f0bdfcb2e6f6fb44678c9352a36a815d3bd5cbde010af1a" Feb 19 10:32:46 crc kubenswrapper[4811]: I0219 10:32:46.744932 4811 scope.go:117] "RemoveContainer" containerID="3aa5b3ed12c1474392602f778fb91ebade5465a969eaff3e1726fba5007fa711" Feb 19 10:32:46 crc kubenswrapper[4811]: I0219 10:32:46.806570 4811 scope.go:117] "RemoveContainer" containerID="d9655df4b91bafa3e771ef345d5f6edc51398eec9e08ccdc8f1eec6c229e5dc4" Feb 19 10:32:56 crc kubenswrapper[4811]: I0219 10:32:56.788654 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:32:56 crc kubenswrapper[4811]: I0219 10:32:56.789257 4811 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:32:56 crc kubenswrapper[4811]: I0219 10:32:56.789301 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:32:56 crc kubenswrapper[4811]: I0219 10:32:56.790113 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5583b6e6746c91e1b9699411335f3051851aaa5bd7775073594504ef9d9cecbd"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:32:56 crc kubenswrapper[4811]: I0219 10:32:56.790164 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://5583b6e6746c91e1b9699411335f3051851aaa5bd7775073594504ef9d9cecbd" gracePeriod=600 Feb 19 10:32:57 crc kubenswrapper[4811]: I0219 10:32:57.887537 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="5583b6e6746c91e1b9699411335f3051851aaa5bd7775073594504ef9d9cecbd" exitCode=0 Feb 19 10:32:57 crc kubenswrapper[4811]: I0219 10:32:57.887612 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"5583b6e6746c91e1b9699411335f3051851aaa5bd7775073594504ef9d9cecbd"} Feb 19 10:32:57 crc kubenswrapper[4811]: I0219 10:32:57.888265 4811 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9"} Feb 19 10:32:57 crc kubenswrapper[4811]: I0219 10:32:57.888288 4811 scope.go:117] "RemoveContainer" containerID="55d19b42477425a3b99b473164b0990b42ac48a5245631a7eb91475e7635a649" Feb 19 10:33:46 crc kubenswrapper[4811]: I0219 10:33:46.874646 4811 scope.go:117] "RemoveContainer" containerID="ed71657de0d32430a5f99fcead929e1d48acaf0662b544a9dc5e99bcdea7df3f" Feb 19 10:33:46 crc kubenswrapper[4811]: I0219 10:33:46.912078 4811 scope.go:117] "RemoveContainer" containerID="fb88796dd5915e8d36949bf388c2c0147a3ae0ad76e34cf9eba67222a6140c0d" Feb 19 10:33:46 crc kubenswrapper[4811]: I0219 10:33:46.942629 4811 scope.go:117] "RemoveContainer" containerID="029c422170c2f579e58e656ba4a7e07a1023925b5dfdc84d59524d7be57d7ef9" Feb 19 10:33:46 crc kubenswrapper[4811]: I0219 10:33:46.987738 4811 scope.go:117] "RemoveContainer" containerID="2c49e7d8650b0afbd2ceea68b5f3ccd33145f22c54a4211f108d54b5a21d51e0" Feb 19 10:33:47 crc kubenswrapper[4811]: I0219 10:33:47.036533 4811 scope.go:117] "RemoveContainer" containerID="da8e033020ef4c4bb22c4e2350708e3908597ed93b4c73737cdfd1bd0a73768f" Feb 19 10:34:47 crc kubenswrapper[4811]: I0219 10:34:47.127867 4811 scope.go:117] "RemoveContainer" containerID="ebacdea3e497af67cbff14f144c2a2137e722e99f432ecba4dbcf23b70e8423b" Feb 19 10:34:47 crc kubenswrapper[4811]: I0219 10:34:47.167693 4811 scope.go:117] "RemoveContainer" containerID="ec8e61ff34c251df2d0255f1a318fe8613e4a234d6bca515d97b64092128f36c" Feb 19 10:34:47 crc kubenswrapper[4811]: I0219 10:34:47.199147 4811 scope.go:117] "RemoveContainer" containerID="13dc6272b24e4ab04f47b68b9d225cec1e92124622717bc8eacc5a853f59d7fc" Feb 19 10:34:47 crc kubenswrapper[4811]: I0219 10:34:47.245621 4811 scope.go:117] "RemoveContainer" 
containerID="9a894e26346c07705d4812963e8e5a722198c93fa5a9a993f6af35a1d92e8d9c" Feb 19 10:34:47 crc kubenswrapper[4811]: I0219 10:34:47.275372 4811 scope.go:117] "RemoveContainer" containerID="5b1790bb12b9820123c9a97faa6e082b8522f146c8aea05c5ebdb448083c6d18" Feb 19 10:34:47 crc kubenswrapper[4811]: I0219 10:34:47.307990 4811 scope.go:117] "RemoveContainer" containerID="cbc61300c164e874e274e655761b10cad62b657817845c4096024dc305776509" Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.279022 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dw5gp"] Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.283534 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.293052 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dw5gp"] Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.375595 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-utilities\") pod \"certified-operators-dw5gp\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.375710 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-catalog-content\") pod \"certified-operators-dw5gp\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.375926 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsrkn\" (UniqueName: 
\"kubernetes.io/projected/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-kube-api-access-xsrkn\") pod \"certified-operators-dw5gp\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.478627 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-utilities\") pod \"certified-operators-dw5gp\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.478704 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-catalog-content\") pod \"certified-operators-dw5gp\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.478876 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsrkn\" (UniqueName: \"kubernetes.io/projected/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-kube-api-access-xsrkn\") pod \"certified-operators-dw5gp\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.479442 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-utilities\") pod \"certified-operators-dw5gp\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.479635 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-catalog-content\") pod \"certified-operators-dw5gp\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.508529 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsrkn\" (UniqueName: \"kubernetes.io/projected/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-kube-api-access-xsrkn\") pod \"certified-operators-dw5gp\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:05 crc kubenswrapper[4811]: I0219 10:35:05.612135 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:06 crc kubenswrapper[4811]: I0219 10:35:06.217599 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dw5gp"] Feb 19 10:35:06 crc kubenswrapper[4811]: I0219 10:35:06.355880 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw5gp" event={"ID":"fd72a21e-5261-49df-9bcd-3fdb9817bd8d","Type":"ContainerStarted","Data":"2fd8ac3df6b855a147bb3ba4ad8a272b38ffd2a2bccf638af627b485899aa15f"} Feb 19 10:35:07 crc kubenswrapper[4811]: I0219 10:35:07.369016 4811 generic.go:334] "Generic (PLEG): container finished" podID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" containerID="3fc8a116db5e8351a29bad217667ddc92ec08f0c808dda17130adc35fc06d5e7" exitCode=0 Feb 19 10:35:07 crc kubenswrapper[4811]: I0219 10:35:07.369147 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw5gp" event={"ID":"fd72a21e-5261-49df-9bcd-3fdb9817bd8d","Type":"ContainerDied","Data":"3fc8a116db5e8351a29bad217667ddc92ec08f0c808dda17130adc35fc06d5e7"} Feb 19 10:35:08 crc kubenswrapper[4811]: I0219 10:35:08.382400 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-dw5gp" event={"ID":"fd72a21e-5261-49df-9bcd-3fdb9817bd8d","Type":"ContainerStarted","Data":"36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897"} Feb 19 10:35:10 crc kubenswrapper[4811]: I0219 10:35:10.404803 4811 generic.go:334] "Generic (PLEG): container finished" podID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" containerID="36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897" exitCode=0 Feb 19 10:35:10 crc kubenswrapper[4811]: I0219 10:35:10.404884 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw5gp" event={"ID":"fd72a21e-5261-49df-9bcd-3fdb9817bd8d","Type":"ContainerDied","Data":"36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897"} Feb 19 10:35:11 crc kubenswrapper[4811]: I0219 10:35:11.421337 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw5gp" event={"ID":"fd72a21e-5261-49df-9bcd-3fdb9817bd8d","Type":"ContainerStarted","Data":"9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375"} Feb 19 10:35:11 crc kubenswrapper[4811]: I0219 10:35:11.461012 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dw5gp" podStartSLOduration=3.038594452 podStartE2EDuration="6.46096968s" podCreationTimestamp="2026-02-19 10:35:05 +0000 UTC" firstStartedPulling="2026-02-19 10:35:07.371655182 +0000 UTC m=+1595.898119868" lastFinishedPulling="2026-02-19 10:35:10.79403039 +0000 UTC m=+1599.320495096" observedRunningTime="2026-02-19 10:35:11.445997008 +0000 UTC m=+1599.972461694" watchObservedRunningTime="2026-02-19 10:35:11.46096968 +0000 UTC m=+1599.987434386" Feb 19 10:35:15 crc kubenswrapper[4811]: I0219 10:35:15.612368 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:15 crc kubenswrapper[4811]: I0219 10:35:15.612989 
4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:15 crc kubenswrapper[4811]: I0219 10:35:15.670104 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:16 crc kubenswrapper[4811]: I0219 10:35:16.514935 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:16 crc kubenswrapper[4811]: I0219 10:35:16.581016 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dw5gp"] Feb 19 10:35:18 crc kubenswrapper[4811]: I0219 10:35:18.496769 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dw5gp" podUID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" containerName="registry-server" containerID="cri-o://9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375" gracePeriod=2 Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.038294 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.194844 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-utilities\") pod \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.194917 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-catalog-content\") pod \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.194966 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsrkn\" (UniqueName: \"kubernetes.io/projected/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-kube-api-access-xsrkn\") pod \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\" (UID: \"fd72a21e-5261-49df-9bcd-3fdb9817bd8d\") " Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.203499 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-utilities" (OuterVolumeSpecName: "utilities") pod "fd72a21e-5261-49df-9bcd-3fdb9817bd8d" (UID: "fd72a21e-5261-49df-9bcd-3fdb9817bd8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.206170 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-kube-api-access-xsrkn" (OuterVolumeSpecName: "kube-api-access-xsrkn") pod "fd72a21e-5261-49df-9bcd-3fdb9817bd8d" (UID: "fd72a21e-5261-49df-9bcd-3fdb9817bd8d"). InnerVolumeSpecName "kube-api-access-xsrkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.252544 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd72a21e-5261-49df-9bcd-3fdb9817bd8d" (UID: "fd72a21e-5261-49df-9bcd-3fdb9817bd8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.297331 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.297365 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.297376 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsrkn\" (UniqueName: \"kubernetes.io/projected/fd72a21e-5261-49df-9bcd-3fdb9817bd8d-kube-api-access-xsrkn\") on node \"crc\" DevicePath \"\"" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.498921 4811 generic.go:334] "Generic (PLEG): container finished" podID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" containerID="9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375" exitCode=0 Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.498975 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dw5gp" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.498980 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw5gp" event={"ID":"fd72a21e-5261-49df-9bcd-3fdb9817bd8d","Type":"ContainerDied","Data":"9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375"} Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.499016 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dw5gp" event={"ID":"fd72a21e-5261-49df-9bcd-3fdb9817bd8d","Type":"ContainerDied","Data":"2fd8ac3df6b855a147bb3ba4ad8a272b38ffd2a2bccf638af627b485899aa15f"} Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.499039 4811 scope.go:117] "RemoveContainer" containerID="9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.529165 4811 scope.go:117] "RemoveContainer" containerID="36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.561973 4811 scope.go:117] "RemoveContainer" containerID="3fc8a116db5e8351a29bad217667ddc92ec08f0c808dda17130adc35fc06d5e7" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.569063 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dw5gp"] Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.585684 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dw5gp"] Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.766617 4811 scope.go:117] "RemoveContainer" containerID="9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375" Feb 19 10:35:19 crc kubenswrapper[4811]: E0219 10:35:19.768567 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375\": container with ID starting with 9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375 not found: ID does not exist" containerID="9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.768604 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375"} err="failed to get container status \"9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375\": rpc error: code = NotFound desc = could not find container \"9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375\": container with ID starting with 9e25bbf393a19af114d1a8c816cffb7337a01d79795bdee5b6e0d820cb717375 not found: ID does not exist" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.768629 4811 scope.go:117] "RemoveContainer" containerID="36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897" Feb 19 10:35:19 crc kubenswrapper[4811]: E0219 10:35:19.770084 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897\": container with ID starting with 36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897 not found: ID does not exist" containerID="36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.770107 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897"} err="failed to get container status \"36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897\": rpc error: code = NotFound desc = could not find container \"36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897\": container with ID 
starting with 36b6744b7d8e8da25b6ea5669e8bb6480f698a92f0cf0174f3eb7cdf3b519897 not found: ID does not exist" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.770120 4811 scope.go:117] "RemoveContainer" containerID="3fc8a116db5e8351a29bad217667ddc92ec08f0c808dda17130adc35fc06d5e7" Feb 19 10:35:19 crc kubenswrapper[4811]: E0219 10:35:19.770321 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc8a116db5e8351a29bad217667ddc92ec08f0c808dda17130adc35fc06d5e7\": container with ID starting with 3fc8a116db5e8351a29bad217667ddc92ec08f0c808dda17130adc35fc06d5e7 not found: ID does not exist" containerID="3fc8a116db5e8351a29bad217667ddc92ec08f0c808dda17130adc35fc06d5e7" Feb 19 10:35:19 crc kubenswrapper[4811]: I0219 10:35:19.770342 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc8a116db5e8351a29bad217667ddc92ec08f0c808dda17130adc35fc06d5e7"} err="failed to get container status \"3fc8a116db5e8351a29bad217667ddc92ec08f0c808dda17130adc35fc06d5e7\": rpc error: code = NotFound desc = could not find container \"3fc8a116db5e8351a29bad217667ddc92ec08f0c808dda17130adc35fc06d5e7\": container with ID starting with 3fc8a116db5e8351a29bad217667ddc92ec08f0c808dda17130adc35fc06d5e7 not found: ID does not exist" Feb 19 10:35:20 crc kubenswrapper[4811]: I0219 10:35:20.596683 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" path="/var/lib/kubelet/pods/fd72a21e-5261-49df-9bcd-3fdb9817bd8d/volumes" Feb 19 10:35:26 crc kubenswrapper[4811]: I0219 10:35:26.788026 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:35:26 crc kubenswrapper[4811]: I0219 
10:35:26.789192 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:35:28 crc kubenswrapper[4811]: I0219 10:35:28.608985 4811 generic.go:334] "Generic (PLEG): container finished" podID="7ce25cb1-51ce-4643-9fac-e7fa4b1afce9" containerID="4b010c5beb65cfdcc2f251c8e6f37fe5b85c492624d56450cdbc871a356217c0" exitCode=0 Feb 19 10:35:28 crc kubenswrapper[4811]: I0219 10:35:28.609088 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" event={"ID":"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9","Type":"ContainerDied","Data":"4b010c5beb65cfdcc2f251c8e6f37fe5b85c492624d56450cdbc871a356217c0"} Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.087335 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.270620 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-ssh-key-openstack-edpm-ipam\") pod \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.270731 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-bootstrap-combined-ca-bundle\") pod \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.270941 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-inventory\") pod \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.271007 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzqh2\" (UniqueName: \"kubernetes.io/projected/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-kube-api-access-gzqh2\") pod \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\" (UID: \"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9\") " Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.278042 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7ce25cb1-51ce-4643-9fac-e7fa4b1afce9" (UID: "7ce25cb1-51ce-4643-9fac-e7fa4b1afce9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.278329 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-kube-api-access-gzqh2" (OuterVolumeSpecName: "kube-api-access-gzqh2") pod "7ce25cb1-51ce-4643-9fac-e7fa4b1afce9" (UID: "7ce25cb1-51ce-4643-9fac-e7fa4b1afce9"). InnerVolumeSpecName "kube-api-access-gzqh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.299324 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-inventory" (OuterVolumeSpecName: "inventory") pod "7ce25cb1-51ce-4643-9fac-e7fa4b1afce9" (UID: "7ce25cb1-51ce-4643-9fac-e7fa4b1afce9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.311728 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7ce25cb1-51ce-4643-9fac-e7fa4b1afce9" (UID: "7ce25cb1-51ce-4643-9fac-e7fa4b1afce9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.374159 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.374205 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzqh2\" (UniqueName: \"kubernetes.io/projected/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-kube-api-access-gzqh2\") on node \"crc\" DevicePath \"\"" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.374221 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.374235 4811 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce25cb1-51ce-4643-9fac-e7fa4b1afce9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.634483 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" event={"ID":"7ce25cb1-51ce-4643-9fac-e7fa4b1afce9","Type":"ContainerDied","Data":"56d319baf283d76ef42bb0d93549de53843bbcf715e67356d8c91154d0649ab0"} Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.634537 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56d319baf283d76ef42bb0d93549de53843bbcf715e67356d8c91154d0649ab0" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.634564 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.729386 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx"] Feb 19 10:35:30 crc kubenswrapper[4811]: E0219 10:35:30.730061 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" containerName="extract-content" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.730085 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" containerName="extract-content" Feb 19 10:35:30 crc kubenswrapper[4811]: E0219 10:35:30.730119 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" containerName="extract-utilities" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.730130 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" containerName="extract-utilities" Feb 19 10:35:30 crc kubenswrapper[4811]: E0219 10:35:30.730151 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" containerName="registry-server" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.730158 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" containerName="registry-server" Feb 19 10:35:30 crc kubenswrapper[4811]: E0219 10:35:30.730184 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce25cb1-51ce-4643-9fac-e7fa4b1afce9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.730193 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce25cb1-51ce-4643-9fac-e7fa4b1afce9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.730430 
4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd72a21e-5261-49df-9bcd-3fdb9817bd8d" containerName="registry-server" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.730469 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce25cb1-51ce-4643-9fac-e7fa4b1afce9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.731266 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.734372 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.734563 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.736023 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.738388 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.745682 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx"] Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.783343 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4lvl\" (UniqueName: \"kubernetes.io/projected/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-kube-api-access-x4lvl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:35:30 crc 
kubenswrapper[4811]: I0219 10:35:30.784085 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.784158 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.886247 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.886314 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.886374 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4lvl\" 
(UniqueName: \"kubernetes.io/projected/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-kube-api-access-x4lvl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.892318 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.894563 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:35:30 crc kubenswrapper[4811]: I0219 10:35:30.906394 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4lvl\" (UniqueName: \"kubernetes.io/projected/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-kube-api-access-x4lvl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:35:31 crc kubenswrapper[4811]: I0219 10:35:31.056166 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:35:31 crc kubenswrapper[4811]: I0219 10:35:31.645506 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:35:31 crc kubenswrapper[4811]: I0219 10:35:31.646571 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx"] Feb 19 10:35:32 crc kubenswrapper[4811]: I0219 10:35:32.664751 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" event={"ID":"57a5bec4-f1ad-45e6-9045-dea45abd7ccf","Type":"ContainerStarted","Data":"1371d43772b54e5667d4afc3ae8b4f692574a871909ec6262cb1262b2d2617c6"} Feb 19 10:35:32 crc kubenswrapper[4811]: I0219 10:35:32.665357 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" event={"ID":"57a5bec4-f1ad-45e6-9045-dea45abd7ccf","Type":"ContainerStarted","Data":"084a563ed6797acf0fbefacfc1ea74e814569cde129b7960340f7985e70e466c"} Feb 19 10:35:32 crc kubenswrapper[4811]: I0219 10:35:32.690522 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" podStartSLOduration=2.296620002 podStartE2EDuration="2.690497264s" podCreationTimestamp="2026-02-19 10:35:30 +0000 UTC" firstStartedPulling="2026-02-19 10:35:31.645067554 +0000 UTC m=+1620.171532240" lastFinishedPulling="2026-02-19 10:35:32.038944816 +0000 UTC m=+1620.565409502" observedRunningTime="2026-02-19 10:35:32.682093519 +0000 UTC m=+1621.208558205" watchObservedRunningTime="2026-02-19 10:35:32.690497264 +0000 UTC m=+1621.216961950" Feb 19 10:35:47 crc kubenswrapper[4811]: I0219 10:35:47.409102 4811 scope.go:117] "RemoveContainer" containerID="896bcb8e7d4cc5d48b57d535d5a9d69f875f64ef8ef2192834e7c9d243c1a17e" Feb 19 10:35:56 crc 
kubenswrapper[4811]: I0219 10:35:56.788300 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:35:56 crc kubenswrapper[4811]: I0219 10:35:56.788966 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:36:13 crc kubenswrapper[4811]: I0219 10:36:13.052312 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-dsmbw"] Feb 19 10:36:13 crc kubenswrapper[4811]: I0219 10:36:13.062033 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-dsmbw"] Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.052269 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-hwhjb"] Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.066626 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gfqfc"] Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.075322 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gfqfc"] Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.084038 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-hwhjb"] Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.091726 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bdcd-account-create-update-x87fb"] Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.100093 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-e501-account-create-update-trk4v"] Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.108808 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bdcd-account-create-update-x87fb"] Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.128854 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-407c-account-create-update-pn24j"] Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.145297 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e501-account-create-update-trk4v"] Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.155888 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-407c-account-create-update-pn24j"] Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.595374 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2001e70d-4b7a-4c4d-8ed3-54cedaea54ea" path="/var/lib/kubelet/pods/2001e70d-4b7a-4c4d-8ed3-54cedaea54ea/volumes" Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.595993 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8853f172-601e-4f67-96d3-5b910eba6c9a" path="/var/lib/kubelet/pods/8853f172-601e-4f67-96d3-5b910eba6c9a/volumes" Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.596591 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac222fb2-5295-4eb7-8611-f2a7157f6176" path="/var/lib/kubelet/pods/ac222fb2-5295-4eb7-8611-f2a7157f6176/volumes" Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.597190 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd758757-2f52-4fe1-b24f-4ca697bf6498" path="/var/lib/kubelet/pods/cd758757-2f52-4fe1-b24f-4ca697bf6498/volumes" Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.598309 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a34c54-3b76-42e5-b4a3-37a238ad1947" 
path="/var/lib/kubelet/pods/d2a34c54-3b76-42e5-b4a3-37a238ad1947/volumes" Feb 19 10:36:14 crc kubenswrapper[4811]: I0219 10:36:14.598943 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6dc01af-9510-4de2-a0ae-a841cc7bf815" path="/var/lib/kubelet/pods/f6dc01af-9510-4de2-a0ae-a841cc7bf815/volumes" Feb 19 10:36:26 crc kubenswrapper[4811]: I0219 10:36:26.788940 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:36:26 crc kubenswrapper[4811]: I0219 10:36:26.789653 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:36:26 crc kubenswrapper[4811]: I0219 10:36:26.789720 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:36:26 crc kubenswrapper[4811]: I0219 10:36:26.790726 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:36:26 crc kubenswrapper[4811]: I0219 10:36:26.790831 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" 
containerName="machine-config-daemon" containerID="cri-o://3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" gracePeriod=600 Feb 19 10:36:26 crc kubenswrapper[4811]: E0219 10:36:26.918658 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:36:27 crc kubenswrapper[4811]: I0219 10:36:27.235959 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" exitCode=0 Feb 19 10:36:27 crc kubenswrapper[4811]: I0219 10:36:27.236034 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9"} Feb 19 10:36:27 crc kubenswrapper[4811]: I0219 10:36:27.236143 4811 scope.go:117] "RemoveContainer" containerID="5583b6e6746c91e1b9699411335f3051851aaa5bd7775073594504ef9d9cecbd" Feb 19 10:36:27 crc kubenswrapper[4811]: I0219 10:36:27.237173 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:36:27 crc kubenswrapper[4811]: E0219 10:36:27.237768 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:36:38 crc kubenswrapper[4811]: I0219 10:36:38.044999 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6ctsp"] Feb 19 10:36:38 crc kubenswrapper[4811]: I0219 10:36:38.055797 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6ctsp"] Feb 19 10:36:38 crc kubenswrapper[4811]: I0219 10:36:38.597109 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18c4186-9fb1-448f-91d4-1cb18a5cc172" path="/var/lib/kubelet/pods/c18c4186-9fb1-448f-91d4-1cb18a5cc172/volumes" Feb 19 10:36:39 crc kubenswrapper[4811]: I0219 10:36:39.585154 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:36:39 crc kubenswrapper[4811]: E0219 10:36:39.585545 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:36:45 crc kubenswrapper[4811]: I0219 10:36:45.034292 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wq4nk"] Feb 19 10:36:45 crc kubenswrapper[4811]: I0219 10:36:45.051554 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wq4nk"] Feb 19 10:36:46 crc kubenswrapper[4811]: I0219 10:36:46.593556 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="774737fd-2783-4012-b2f7-26966fc085d9" path="/var/lib/kubelet/pods/774737fd-2783-4012-b2f7-26966fc085d9/volumes" Feb 19 10:36:47 crc kubenswrapper[4811]: I0219 
10:36:47.487525 4811 scope.go:117] "RemoveContainer" containerID="aa5516e591105c4eec7a2b66b6b9c8a67d04b36ae1ece5fa766c9f44b7c02fdc" Feb 19 10:36:47 crc kubenswrapper[4811]: I0219 10:36:47.520128 4811 scope.go:117] "RemoveContainer" containerID="9e50fa06f94b3a7d7d8ffa16bbf1991a4a20f0f5ab833c20e5f22b844e6655e3" Feb 19 10:36:47 crc kubenswrapper[4811]: I0219 10:36:47.577890 4811 scope.go:117] "RemoveContainer" containerID="6501c8e1318cd53ed167bc7b7b443a15d607b118afd33cc80d528689c41db2e8" Feb 19 10:36:47 crc kubenswrapper[4811]: I0219 10:36:47.619599 4811 scope.go:117] "RemoveContainer" containerID="dca9145c764fa95317a9a216c692c289f7a09ea6a226c7feee7c413906b75919" Feb 19 10:36:47 crc kubenswrapper[4811]: I0219 10:36:47.661501 4811 scope.go:117] "RemoveContainer" containerID="9f967a3ebea9d2591d2dc2599133d5e6aa076323a607f66e1cf33dc6cfe6521a" Feb 19 10:36:47 crc kubenswrapper[4811]: I0219 10:36:47.712646 4811 scope.go:117] "RemoveContainer" containerID="8f59ea20915b24e2a6c5c106c57748d1c9fc01e4ba55ecc3416bb13c7f997880" Feb 19 10:36:47 crc kubenswrapper[4811]: I0219 10:36:47.756072 4811 scope.go:117] "RemoveContainer" containerID="c17e6fd5eeb6b419c26849f83d1668f3dc39420162b853654baa0b8ee58e683b" Feb 19 10:36:47 crc kubenswrapper[4811]: I0219 10:36:47.782610 4811 scope.go:117] "RemoveContainer" containerID="c914bf592c5bef92aa7219d1f7e3fb6ed4cc9e8bd930d693b31b01bbeda82ba4" Feb 19 10:36:49 crc kubenswrapper[4811]: I0219 10:36:49.077404 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5217-account-create-update-bdx6j"] Feb 19 10:36:49 crc kubenswrapper[4811]: I0219 10:36:49.089695 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-hcwkd"] Feb 19 10:36:49 crc kubenswrapper[4811]: I0219 10:36:49.104422 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8130-account-create-update-trnwt"] Feb 19 10:36:49 crc kubenswrapper[4811]: I0219 10:36:49.114271 4811 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/cinder-5217-account-create-update-bdx6j"] Feb 19 10:36:49 crc kubenswrapper[4811]: I0219 10:36:49.124066 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8130-account-create-update-trnwt"] Feb 19 10:36:49 crc kubenswrapper[4811]: I0219 10:36:49.133505 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-hcwkd"] Feb 19 10:36:49 crc kubenswrapper[4811]: I0219 10:36:49.143369 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xlmv4"] Feb 19 10:36:49 crc kubenswrapper[4811]: I0219 10:36:49.153644 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2d23-account-create-update-bgrsz"] Feb 19 10:36:49 crc kubenswrapper[4811]: I0219 10:36:49.163336 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2d23-account-create-update-bgrsz"] Feb 19 10:36:49 crc kubenswrapper[4811]: I0219 10:36:49.172385 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xlmv4"] Feb 19 10:36:50 crc kubenswrapper[4811]: I0219 10:36:50.595370 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29308f94-60d2-402b-a4f0-76652537e794" path="/var/lib/kubelet/pods/29308f94-60d2-402b-a4f0-76652537e794/volumes" Feb 19 10:36:50 crc kubenswrapper[4811]: I0219 10:36:50.596289 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3390f3ef-b993-411b-85ff-e90bb430960c" path="/var/lib/kubelet/pods/3390f3ef-b993-411b-85ff-e90bb430960c/volumes" Feb 19 10:36:50 crc kubenswrapper[4811]: I0219 10:36:50.596814 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="479fe0ed-404c-4378-b035-e7518e74e8db" path="/var/lib/kubelet/pods/479fe0ed-404c-4378-b035-e7518e74e8db/volumes" Feb 19 10:36:50 crc kubenswrapper[4811]: I0219 10:36:50.597326 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5d02e9a5-6f2e-4754-865e-51492f4f81f7" path="/var/lib/kubelet/pods/5d02e9a5-6f2e-4754-865e-51492f4f81f7/volumes" Feb 19 10:36:50 crc kubenswrapper[4811]: I0219 10:36:50.598365 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9bab52b-f5c8-425d-bdc0-601e101eb0a4" path="/var/lib/kubelet/pods/f9bab52b-f5c8-425d-bdc0-601e101eb0a4/volumes" Feb 19 10:36:52 crc kubenswrapper[4811]: I0219 10:36:52.591676 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:36:52 crc kubenswrapper[4811]: E0219 10:36:52.592290 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:36:55 crc kubenswrapper[4811]: I0219 10:36:55.049961 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-xq7x5"] Feb 19 10:36:55 crc kubenswrapper[4811]: I0219 10:36:55.061747 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-xq7x5"] Feb 19 10:36:56 crc kubenswrapper[4811]: I0219 10:36:56.596493 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c179a32-5560-4ed7-9e71-aaf2f196fb8e" path="/var/lib/kubelet/pods/9c179a32-5560-4ed7-9e71-aaf2f196fb8e/volumes" Feb 19 10:37:04 crc kubenswrapper[4811]: I0219 10:37:04.584895 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:37:04 crc kubenswrapper[4811]: E0219 10:37:04.585951 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:37:15 crc kubenswrapper[4811]: I0219 10:37:15.044474 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-z4vh6"] Feb 19 10:37:15 crc kubenswrapper[4811]: I0219 10:37:15.067003 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-z4vh6"] Feb 19 10:37:15 crc kubenswrapper[4811]: I0219 10:37:15.749700 4811 generic.go:334] "Generic (PLEG): container finished" podID="57a5bec4-f1ad-45e6-9045-dea45abd7ccf" containerID="1371d43772b54e5667d4afc3ae8b4f692574a871909ec6262cb1262b2d2617c6" exitCode=0 Feb 19 10:37:15 crc kubenswrapper[4811]: I0219 10:37:15.749798 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" event={"ID":"57a5bec4-f1ad-45e6-9045-dea45abd7ccf","Type":"ContainerDied","Data":"1371d43772b54e5667d4afc3ae8b4f692574a871909ec6262cb1262b2d2617c6"} Feb 19 10:37:16 crc kubenswrapper[4811]: I0219 10:37:16.599277 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9133229-76e0-4848-b61e-f70c53216929" path="/var/lib/kubelet/pods/a9133229-76e0-4848-b61e-f70c53216929/volumes" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.266041 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.346945 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-inventory\") pod \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.347226 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4lvl\" (UniqueName: \"kubernetes.io/projected/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-kube-api-access-x4lvl\") pod \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.347284 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-ssh-key-openstack-edpm-ipam\") pod \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\" (UID: \"57a5bec4-f1ad-45e6-9045-dea45abd7ccf\") " Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.359113 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-kube-api-access-x4lvl" (OuterVolumeSpecName: "kube-api-access-x4lvl") pod "57a5bec4-f1ad-45e6-9045-dea45abd7ccf" (UID: "57a5bec4-f1ad-45e6-9045-dea45abd7ccf"). InnerVolumeSpecName "kube-api-access-x4lvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.384690 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "57a5bec4-f1ad-45e6-9045-dea45abd7ccf" (UID: "57a5bec4-f1ad-45e6-9045-dea45abd7ccf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.404476 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-inventory" (OuterVolumeSpecName: "inventory") pod "57a5bec4-f1ad-45e6-9045-dea45abd7ccf" (UID: "57a5bec4-f1ad-45e6-9045-dea45abd7ccf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.449727 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4lvl\" (UniqueName: \"kubernetes.io/projected/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-kube-api-access-x4lvl\") on node \"crc\" DevicePath \"\"" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.449768 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.449784 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57a5bec4-f1ad-45e6-9045-dea45abd7ccf-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.583590 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:37:17 crc kubenswrapper[4811]: 
E0219 10:37:17.583933 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.772632 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" event={"ID":"57a5bec4-f1ad-45e6-9045-dea45abd7ccf","Type":"ContainerDied","Data":"084a563ed6797acf0fbefacfc1ea74e814569cde129b7960340f7985e70e466c"} Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.772702 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="084a563ed6797acf0fbefacfc1ea74e814569cde129b7960340f7985e70e466c" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.772708 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.854652 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb"] Feb 19 10:37:17 crc kubenswrapper[4811]: E0219 10:37:17.855333 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a5bec4-f1ad-45e6-9045-dea45abd7ccf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.855418 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a5bec4-f1ad-45e6-9045-dea45abd7ccf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.855725 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a5bec4-f1ad-45e6-9045-dea45abd7ccf" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.856597 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.859078 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.859383 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.860011 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.860137 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.880995 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb"] Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.969684 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-88tdb\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:37:17 crc kubenswrapper[4811]: I0219 10:37:17.969800 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q88t5\" (UniqueName: \"kubernetes.io/projected/ddf9ccc5-ba24-4253-93d5-1c2024978965-kube-api-access-q88t5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-88tdb\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:37:17 crc kubenswrapper[4811]: 
I0219 10:37:17.969893 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-88tdb\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:37:18 crc kubenswrapper[4811]: I0219 10:37:18.071207 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-88tdb\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:37:18 crc kubenswrapper[4811]: I0219 10:37:18.071323 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q88t5\" (UniqueName: \"kubernetes.io/projected/ddf9ccc5-ba24-4253-93d5-1c2024978965-kube-api-access-q88t5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-88tdb\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:37:18 crc kubenswrapper[4811]: I0219 10:37:18.071396 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-88tdb\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:37:18 crc kubenswrapper[4811]: I0219 10:37:18.078120 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-88tdb\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:37:18 crc kubenswrapper[4811]: I0219 10:37:18.078172 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-88tdb\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:37:18 crc kubenswrapper[4811]: I0219 10:37:18.087245 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q88t5\" (UniqueName: \"kubernetes.io/projected/ddf9ccc5-ba24-4253-93d5-1c2024978965-kube-api-access-q88t5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-88tdb\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:37:18 crc kubenswrapper[4811]: I0219 10:37:18.179772 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:37:18 crc kubenswrapper[4811]: I0219 10:37:18.735813 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb"] Feb 19 10:37:18 crc kubenswrapper[4811]: I0219 10:37:18.783261 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" event={"ID":"ddf9ccc5-ba24-4253-93d5-1c2024978965","Type":"ContainerStarted","Data":"6e52b6bea7d882c7f6f628e4e3ba1ccdb8b773a62844163222760987587379a2"} Feb 19 10:37:19 crc kubenswrapper[4811]: I0219 10:37:19.796268 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" event={"ID":"ddf9ccc5-ba24-4253-93d5-1c2024978965","Type":"ContainerStarted","Data":"5333f3b380e7b82651be54060c947f2cbd435a4b0e2be840d155d6e29076e799"} Feb 19 10:37:19 crc kubenswrapper[4811]: I0219 10:37:19.814588 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" podStartSLOduration=2.3795552300000002 podStartE2EDuration="2.814565721s" podCreationTimestamp="2026-02-19 10:37:17 +0000 UTC" firstStartedPulling="2026-02-19 10:37:18.732533608 +0000 UTC m=+1727.258998294" lastFinishedPulling="2026-02-19 10:37:19.167544099 +0000 UTC m=+1727.694008785" observedRunningTime="2026-02-19 10:37:19.80981877 +0000 UTC m=+1728.336283456" watchObservedRunningTime="2026-02-19 10:37:19.814565721 +0000 UTC m=+1728.341030407" Feb 19 10:37:30 crc kubenswrapper[4811]: I0219 10:37:30.050493 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-8skw7"] Feb 19 10:37:30 crc kubenswrapper[4811]: I0219 10:37:30.061699 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-8skw7"] Feb 19 10:37:30 crc 
kubenswrapper[4811]: I0219 10:37:30.597921 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c" path="/var/lib/kubelet/pods/1f3aa0de-ef5c-4148-97c5-3ef8ebcb1d7c/volumes" Feb 19 10:37:32 crc kubenswrapper[4811]: I0219 10:37:32.594080 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:37:32 crc kubenswrapper[4811]: E0219 10:37:32.594970 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:37:39 crc kubenswrapper[4811]: I0219 10:37:39.041424 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-928sb"] Feb 19 10:37:39 crc kubenswrapper[4811]: I0219 10:37:39.051818 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-928sb"] Feb 19 10:37:40 crc kubenswrapper[4811]: I0219 10:37:40.595792 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f091a50a-3df7-47c9-84a9-02c074138c7b" path="/var/lib/kubelet/pods/f091a50a-3df7-47c9-84a9-02c074138c7b/volumes" Feb 19 10:37:44 crc kubenswrapper[4811]: I0219 10:37:44.585181 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:37:44 crc kubenswrapper[4811]: E0219 10:37:44.586121 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:37:46 crc kubenswrapper[4811]: I0219 10:37:46.045399 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kdq88"] Feb 19 10:37:46 crc kubenswrapper[4811]: I0219 10:37:46.058496 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kdq88"] Feb 19 10:37:46 crc kubenswrapper[4811]: I0219 10:37:46.598499 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ad234d-f37a-45e3-9b76-c45aabd896ae" path="/var/lib/kubelet/pods/81ad234d-f37a-45e3-9b76-c45aabd896ae/volumes" Feb 19 10:37:47 crc kubenswrapper[4811]: I0219 10:37:47.989427 4811 scope.go:117] "RemoveContainer" containerID="3609919436cad504e5c0128520065daccc34908de46da06a7e7bd72aa75c6b98" Feb 19 10:37:48 crc kubenswrapper[4811]: I0219 10:37:48.037113 4811 scope.go:117] "RemoveContainer" containerID="6d889ee7739938850441777cdaca73b11578444dfbefde99c16ebfef6aa5eb5b" Feb 19 10:37:48 crc kubenswrapper[4811]: I0219 10:37:48.090945 4811 scope.go:117] "RemoveContainer" containerID="9263bd647429f2485b27f75564f6890108b8a719d0155ef66514496a3e315aa1" Feb 19 10:37:48 crc kubenswrapper[4811]: I0219 10:37:48.136181 4811 scope.go:117] "RemoveContainer" containerID="6802d851187ba13a812916ff101da5888a68bf72b1ff5782bef294df6f713955" Feb 19 10:37:48 crc kubenswrapper[4811]: I0219 10:37:48.244027 4811 scope.go:117] "RemoveContainer" containerID="5c34e6191dc38060994eaf80a5fdcc31ae900aa69bd786bb66405cf07310918b" Feb 19 10:37:48 crc kubenswrapper[4811]: I0219 10:37:48.273275 4811 scope.go:117] "RemoveContainer" containerID="8cbdb804e3ecadb37f188b25f92c263fc49bebd42bd0e8b38cb234c97a2a2529" Feb 19 10:37:48 crc kubenswrapper[4811]: I0219 10:37:48.340587 4811 scope.go:117] "RemoveContainer" 
containerID="5711d6fc343ac0054ed86ab6c1aab0d21d8fe8fb6f37d14cb38fada227c0f52c" Feb 19 10:37:48 crc kubenswrapper[4811]: I0219 10:37:48.362530 4811 scope.go:117] "RemoveContainer" containerID="ca739c4be4dd27c9d7d3a1a51bcea7c7eb0729f58f268e5111137f37230fa9f8" Feb 19 10:37:48 crc kubenswrapper[4811]: I0219 10:37:48.388009 4811 scope.go:117] "RemoveContainer" containerID="3ebe304eba92b1c15fdd3e06679060c5a4e36e380b5c3c66b49a6e27eb8abe44" Feb 19 10:37:48 crc kubenswrapper[4811]: I0219 10:37:48.429345 4811 scope.go:117] "RemoveContainer" containerID="11d9e5d0cb4f616ae262209e47e6a4b07b90751f3880e3a7cc497879bc440c97" Feb 19 10:37:57 crc kubenswrapper[4811]: I0219 10:37:57.040834 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rqgzq"] Feb 19 10:37:57 crc kubenswrapper[4811]: I0219 10:37:57.054032 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xggvg"] Feb 19 10:37:57 crc kubenswrapper[4811]: I0219 10:37:57.066941 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xggvg"] Feb 19 10:37:57 crc kubenswrapper[4811]: I0219 10:37:57.077785 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rqgzq"] Feb 19 10:37:57 crc kubenswrapper[4811]: I0219 10:37:57.583776 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:37:57 crc kubenswrapper[4811]: E0219 10:37:57.584238 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:37:58 crc kubenswrapper[4811]: I0219 10:37:58.609988 4811 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9741f82c-1dce-4e95-9442-c424bcf8927d" path="/var/lib/kubelet/pods/9741f82c-1dce-4e95-9442-c424bcf8927d/volumes" Feb 19 10:37:58 crc kubenswrapper[4811]: I0219 10:37:58.613507 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d369ea05-6a36-44fd-bdd0-26eead9ba16d" path="/var/lib/kubelet/pods/d369ea05-6a36-44fd-bdd0-26eead9ba16d/volumes" Feb 19 10:38:08 crc kubenswrapper[4811]: I0219 10:38:08.584136 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:38:08 crc kubenswrapper[4811]: E0219 10:38:08.585347 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:38:21 crc kubenswrapper[4811]: I0219 10:38:21.585070 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:38:21 crc kubenswrapper[4811]: E0219 10:38:21.586549 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:38:28 crc kubenswrapper[4811]: I0219 10:38:28.495669 4811 generic.go:334] "Generic (PLEG): container finished" podID="ddf9ccc5-ba24-4253-93d5-1c2024978965" 
containerID="5333f3b380e7b82651be54060c947f2cbd435a4b0e2be840d155d6e29076e799" exitCode=0 Feb 19 10:38:28 crc kubenswrapper[4811]: I0219 10:38:28.495757 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" event={"ID":"ddf9ccc5-ba24-4253-93d5-1c2024978965","Type":"ContainerDied","Data":"5333f3b380e7b82651be54060c947f2cbd435a4b0e2be840d155d6e29076e799"} Feb 19 10:38:29 crc kubenswrapper[4811]: I0219 10:38:29.924841 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.077911 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-ssh-key-openstack-edpm-ipam\") pod \"ddf9ccc5-ba24-4253-93d5-1c2024978965\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.078562 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-inventory\") pod \"ddf9ccc5-ba24-4253-93d5-1c2024978965\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.078785 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q88t5\" (UniqueName: \"kubernetes.io/projected/ddf9ccc5-ba24-4253-93d5-1c2024978965-kube-api-access-q88t5\") pod \"ddf9ccc5-ba24-4253-93d5-1c2024978965\" (UID: \"ddf9ccc5-ba24-4253-93d5-1c2024978965\") " Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.083626 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf9ccc5-ba24-4253-93d5-1c2024978965-kube-api-access-q88t5" (OuterVolumeSpecName: 
"kube-api-access-q88t5") pod "ddf9ccc5-ba24-4253-93d5-1c2024978965" (UID: "ddf9ccc5-ba24-4253-93d5-1c2024978965"). InnerVolumeSpecName "kube-api-access-q88t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.106033 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-inventory" (OuterVolumeSpecName: "inventory") pod "ddf9ccc5-ba24-4253-93d5-1c2024978965" (UID: "ddf9ccc5-ba24-4253-93d5-1c2024978965"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.117817 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ddf9ccc5-ba24-4253-93d5-1c2024978965" (UID: "ddf9ccc5-ba24-4253-93d5-1c2024978965"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.180800 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.180835 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q88t5\" (UniqueName: \"kubernetes.io/projected/ddf9ccc5-ba24-4253-93d5-1c2024978965-kube-api-access-q88t5\") on node \"crc\" DevicePath \"\"" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.180850 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ddf9ccc5-ba24-4253-93d5-1c2024978965-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.521174 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" event={"ID":"ddf9ccc5-ba24-4253-93d5-1c2024978965","Type":"ContainerDied","Data":"6e52b6bea7d882c7f6f628e4e3ba1ccdb8b773a62844163222760987587379a2"} Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.521218 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e52b6bea7d882c7f6f628e4e3ba1ccdb8b773a62844163222760987587379a2" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.521575 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-88tdb" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.619984 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz"] Feb 19 10:38:30 crc kubenswrapper[4811]: E0219 10:38:30.620641 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf9ccc5-ba24-4253-93d5-1c2024978965" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.620668 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf9ccc5-ba24-4253-93d5-1c2024978965" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.620943 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf9ccc5-ba24-4253-93d5-1c2024978965" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.621831 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.625501 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.625932 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.626287 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.626895 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.629414 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz"] Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.795831 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxghd\" (UniqueName: \"kubernetes.io/projected/6df41748-d8bb-49c9-aa6f-403827039ae7-kube-api-access-vxghd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.795938 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 
10:38:30.796078 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.897932 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.898099 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxghd\" (UniqueName: \"kubernetes.io/projected/6df41748-d8bb-49c9-aa6f-403827039ae7-kube-api-access-vxghd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.898153 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.908619 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.908789 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.932143 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxghd\" (UniqueName: \"kubernetes.io/projected/6df41748-d8bb-49c9-aa6f-403827039ae7-kube-api-access-vxghd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:30 crc kubenswrapper[4811]: I0219 10:38:30.944808 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:31 crc kubenswrapper[4811]: I0219 10:38:31.510882 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz"] Feb 19 10:38:31 crc kubenswrapper[4811]: I0219 10:38:31.531814 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" event={"ID":"6df41748-d8bb-49c9-aa6f-403827039ae7","Type":"ContainerStarted","Data":"b80cbe62c01c6e09c8d09e7d3310aea8265b31d216c55e61ebe9386af927a443"} Feb 19 10:38:32 crc kubenswrapper[4811]: I0219 10:38:32.543774 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" event={"ID":"6df41748-d8bb-49c9-aa6f-403827039ae7","Type":"ContainerStarted","Data":"e0ebf9a36abfaf10fff98039db9cf998c876c150de1e17454d3fdd5633a6f41f"} Feb 19 10:38:32 crc kubenswrapper[4811]: I0219 10:38:32.565502 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" podStartSLOduration=2.108786589 podStartE2EDuration="2.565483384s" podCreationTimestamp="2026-02-19 10:38:30 +0000 UTC" firstStartedPulling="2026-02-19 10:38:31.523323578 +0000 UTC m=+1800.049788264" lastFinishedPulling="2026-02-19 10:38:31.980020363 +0000 UTC m=+1800.506485059" observedRunningTime="2026-02-19 10:38:32.562727364 +0000 UTC m=+1801.089192040" watchObservedRunningTime="2026-02-19 10:38:32.565483384 +0000 UTC m=+1801.091948070" Feb 19 10:38:34 crc kubenswrapper[4811]: I0219 10:38:34.583756 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:38:34 crc kubenswrapper[4811]: E0219 10:38:34.584349 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:38:36 crc kubenswrapper[4811]: I0219 10:38:36.589101 4811 generic.go:334] "Generic (PLEG): container finished" podID="6df41748-d8bb-49c9-aa6f-403827039ae7" containerID="e0ebf9a36abfaf10fff98039db9cf998c876c150de1e17454d3fdd5633a6f41f" exitCode=0 Feb 19 10:38:36 crc kubenswrapper[4811]: I0219 10:38:36.597659 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" event={"ID":"6df41748-d8bb-49c9-aa6f-403827039ae7","Type":"ContainerDied","Data":"e0ebf9a36abfaf10fff98039db9cf998c876c150de1e17454d3fdd5633a6f41f"} Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.062747 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.154653 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-ssh-key-openstack-edpm-ipam\") pod \"6df41748-d8bb-49c9-aa6f-403827039ae7\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.154720 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-inventory\") pod \"6df41748-d8bb-49c9-aa6f-403827039ae7\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.154993 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxghd\" (UniqueName: \"kubernetes.io/projected/6df41748-d8bb-49c9-aa6f-403827039ae7-kube-api-access-vxghd\") pod \"6df41748-d8bb-49c9-aa6f-403827039ae7\" (UID: \"6df41748-d8bb-49c9-aa6f-403827039ae7\") " Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.160681 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df41748-d8bb-49c9-aa6f-403827039ae7-kube-api-access-vxghd" (OuterVolumeSpecName: "kube-api-access-vxghd") pod "6df41748-d8bb-49c9-aa6f-403827039ae7" (UID: "6df41748-d8bb-49c9-aa6f-403827039ae7"). InnerVolumeSpecName "kube-api-access-vxghd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.188976 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-inventory" (OuterVolumeSpecName: "inventory") pod "6df41748-d8bb-49c9-aa6f-403827039ae7" (UID: "6df41748-d8bb-49c9-aa6f-403827039ae7"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.191046 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6df41748-d8bb-49c9-aa6f-403827039ae7" (UID: "6df41748-d8bb-49c9-aa6f-403827039ae7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.257338 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxghd\" (UniqueName: \"kubernetes.io/projected/6df41748-d8bb-49c9-aa6f-403827039ae7-kube-api-access-vxghd\") on node \"crc\" DevicePath \"\"" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.257688 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.257706 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6df41748-d8bb-49c9-aa6f-403827039ae7-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.612610 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" event={"ID":"6df41748-d8bb-49c9-aa6f-403827039ae7","Type":"ContainerDied","Data":"b80cbe62c01c6e09c8d09e7d3310aea8265b31d216c55e61ebe9386af927a443"} Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.612649 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.612655 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b80cbe62c01c6e09c8d09e7d3310aea8265b31d216c55e61ebe9386af927a443" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.684483 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s"] Feb 19 10:38:38 crc kubenswrapper[4811]: E0219 10:38:38.684937 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df41748-d8bb-49c9-aa6f-403827039ae7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.684961 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df41748-d8bb-49c9-aa6f-403827039ae7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.685232 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df41748-d8bb-49c9-aa6f-403827039ae7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.686071 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.689222 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.690069 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.690219 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.697159 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.700546 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s"] Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.871958 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4mzt\" (UniqueName: \"kubernetes.io/projected/4ed42933-309e-4fe0-874b-7ec5fb67d807-kube-api-access-k4mzt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kg62s\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.872022 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kg62s\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 
10:38:38.872129 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kg62s\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.973704 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4mzt\" (UniqueName: \"kubernetes.io/projected/4ed42933-309e-4fe0-874b-7ec5fb67d807-kube-api-access-k4mzt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kg62s\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.973775 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kg62s\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.973821 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kg62s\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.981144 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kg62s\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.985253 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kg62s\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:38:38 crc kubenswrapper[4811]: I0219 10:38:38.998124 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4mzt\" (UniqueName: \"kubernetes.io/projected/4ed42933-309e-4fe0-874b-7ec5fb67d807-kube-api-access-k4mzt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kg62s\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:38:39 crc kubenswrapper[4811]: I0219 10:38:39.006974 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:38:39 crc kubenswrapper[4811]: I0219 10:38:39.538890 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s"] Feb 19 10:38:39 crc kubenswrapper[4811]: I0219 10:38:39.625275 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" event={"ID":"4ed42933-309e-4fe0-874b-7ec5fb67d807","Type":"ContainerStarted","Data":"73cfa681be904d4e9d94190cd88683ff63a087d3c49e6f40079bab78415f7e3d"} Feb 19 10:38:40 crc kubenswrapper[4811]: I0219 10:38:40.635228 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" event={"ID":"4ed42933-309e-4fe0-874b-7ec5fb67d807","Type":"ContainerStarted","Data":"f5c203a8367dc1e528edf4eddc868cd2b7ed0f44d09971d735b07dca09e79555"} Feb 19 10:38:41 crc kubenswrapper[4811]: I0219 10:38:41.042624 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" podStartSLOduration=2.604055748 podStartE2EDuration="3.042605339s" podCreationTimestamp="2026-02-19 10:38:38 +0000 UTC" firstStartedPulling="2026-02-19 10:38:39.54143643 +0000 UTC m=+1808.067901116" lastFinishedPulling="2026-02-19 10:38:39.979986021 +0000 UTC m=+1808.506450707" observedRunningTime="2026-02-19 10:38:40.659349358 +0000 UTC m=+1809.185814044" watchObservedRunningTime="2026-02-19 10:38:41.042605339 +0000 UTC m=+1809.569070025" Feb 19 10:38:41 crc kubenswrapper[4811]: I0219 10:38:41.048662 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-kpczv"] Feb 19 10:38:41 crc kubenswrapper[4811]: I0219 10:38:41.061550 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-kpczv"] Feb 19 10:38:41 crc kubenswrapper[4811]: I0219 10:38:41.074206 4811 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1d9e-account-create-update-rb5mq"] Feb 19 10:38:41 crc kubenswrapper[4811]: I0219 10:38:41.086635 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1d9e-account-create-update-rb5mq"] Feb 19 10:38:42 crc kubenswrapper[4811]: I0219 10:38:42.600891 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dfb180a-094f-4716-b7b8-a50cb4841cc9" path="/var/lib/kubelet/pods/3dfb180a-094f-4716-b7b8-a50cb4841cc9/volumes" Feb 19 10:38:42 crc kubenswrapper[4811]: I0219 10:38:42.602647 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78de4cdd-5be9-44ae-b1b9-783bf1279fae" path="/var/lib/kubelet/pods/78de4cdd-5be9-44ae-b1b9-783bf1279fae/volumes" Feb 19 10:38:43 crc kubenswrapper[4811]: I0219 10:38:43.029852 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-q6s4v"] Feb 19 10:38:43 crc kubenswrapper[4811]: I0219 10:38:43.038110 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3102-account-create-update-qk57q"] Feb 19 10:38:43 crc kubenswrapper[4811]: I0219 10:38:43.048515 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2552-account-create-update-7qdk6"] Feb 19 10:38:43 crc kubenswrapper[4811]: I0219 10:38:43.062562 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3102-account-create-update-qk57q"] Feb 19 10:38:43 crc kubenswrapper[4811]: I0219 10:38:43.070212 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-q6s4v"] Feb 19 10:38:43 crc kubenswrapper[4811]: I0219 10:38:43.079240 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-tc999"] Feb 19 10:38:43 crc kubenswrapper[4811]: I0219 10:38:43.087030 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2552-account-create-update-7qdk6"] Feb 19 
10:38:43 crc kubenswrapper[4811]: I0219 10:38:43.095041 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-tc999"] Feb 19 10:38:44 crc kubenswrapper[4811]: I0219 10:38:44.597738 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d3635d-5cd3-4cbb-a214-5777c6b95a3e" path="/var/lib/kubelet/pods/09d3635d-5cd3-4cbb-a214-5777c6b95a3e/volumes" Feb 19 10:38:44 crc kubenswrapper[4811]: I0219 10:38:44.598705 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="409ceda0-e9b0-4d91-8a2b-827d68b00483" path="/var/lib/kubelet/pods/409ceda0-e9b0-4d91-8a2b-827d68b00483/volumes" Feb 19 10:38:44 crc kubenswrapper[4811]: I0219 10:38:44.599566 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aea467f-47c2-470e-a5db-2c3146a21658" path="/var/lib/kubelet/pods/5aea467f-47c2-470e-a5db-2c3146a21658/volumes" Feb 19 10:38:44 crc kubenswrapper[4811]: I0219 10:38:44.600238 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9be9489-aa33-45fd-881f-7b8da8a5022c" path="/var/lib/kubelet/pods/a9be9489-aa33-45fd-881f-7b8da8a5022c/volumes" Feb 19 10:38:47 crc kubenswrapper[4811]: I0219 10:38:47.584661 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:38:47 crc kubenswrapper[4811]: E0219 10:38:47.584956 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:38:48 crc kubenswrapper[4811]: I0219 10:38:48.658182 4811 scope.go:117] "RemoveContainer" 
containerID="93af5527cd4fc76efa832e4860f1a0123cc276a291e24bacbeedde19f3726189" Feb 19 10:38:49 crc kubenswrapper[4811]: I0219 10:38:49.208784 4811 scope.go:117] "RemoveContainer" containerID="998e46efff20dedbb0c454556351271f7191e8f95d11e5a4806b9a68c93a20a5" Feb 19 10:38:49 crc kubenswrapper[4811]: I0219 10:38:49.258312 4811 scope.go:117] "RemoveContainer" containerID="5da12c13e5991e34463f734fc0ffe4ed5830c82ae0c3807e259991334539fb85" Feb 19 10:38:49 crc kubenswrapper[4811]: I0219 10:38:49.286196 4811 scope.go:117] "RemoveContainer" containerID="e41874502470ed1c0b31f888db5b94a42c3cf931b16f7ad49055790bbc466251" Feb 19 10:38:49 crc kubenswrapper[4811]: I0219 10:38:49.351647 4811 scope.go:117] "RemoveContainer" containerID="37fa49774aaccfdcd39825cdbed463868d77981cdaba6d6c402a16eecb98f0d6" Feb 19 10:38:49 crc kubenswrapper[4811]: I0219 10:38:49.382087 4811 scope.go:117] "RemoveContainer" containerID="ffe2e1c1ace860741ed77b73dd787f3888c036f7dfcd56a01f34b64e2677be14" Feb 19 10:38:49 crc kubenswrapper[4811]: I0219 10:38:49.434108 4811 scope.go:117] "RemoveContainer" containerID="faa277203517eba149841671b72a8f55ca8e597f695e214ea0ea08f863819e97" Feb 19 10:38:49 crc kubenswrapper[4811]: I0219 10:38:49.460771 4811 scope.go:117] "RemoveContainer" containerID="a9bd8448d14e1f048becd22b643686d9e8f43c2482a0e5120d351d5a1da45299" Feb 19 10:38:58 crc kubenswrapper[4811]: I0219 10:38:58.583833 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:38:58 crc kubenswrapper[4811]: E0219 10:38:58.584701 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" 
podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:39:09 crc kubenswrapper[4811]: I0219 10:39:09.583267 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:39:09 crc kubenswrapper[4811]: E0219 10:39:09.584240 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:39:18 crc kubenswrapper[4811]: I0219 10:39:18.015431 4811 generic.go:334] "Generic (PLEG): container finished" podID="4ed42933-309e-4fe0-874b-7ec5fb67d807" containerID="f5c203a8367dc1e528edf4eddc868cd2b7ed0f44d09971d735b07dca09e79555" exitCode=0 Feb 19 10:39:18 crc kubenswrapper[4811]: I0219 10:39:18.015606 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" event={"ID":"4ed42933-309e-4fe0-874b-7ec5fb67d807","Type":"ContainerDied","Data":"f5c203a8367dc1e528edf4eddc868cd2b7ed0f44d09971d735b07dca09e79555"} Feb 19 10:39:18 crc kubenswrapper[4811]: I0219 10:39:18.070019 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j6nxh"] Feb 19 10:39:18 crc kubenswrapper[4811]: I0219 10:39:18.088970 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-j6nxh"] Feb 19 10:39:18 crc kubenswrapper[4811]: I0219 10:39:18.598324 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28527c23-663e-467f-b7df-90e96a677700" path="/var/lib/kubelet/pods/28527c23-663e-467f-b7df-90e96a677700/volumes" Feb 19 10:39:19 crc kubenswrapper[4811]: I0219 10:39:19.443384 4811 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:39:19 crc kubenswrapper[4811]: I0219 10:39:19.622682 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4mzt\" (UniqueName: \"kubernetes.io/projected/4ed42933-309e-4fe0-874b-7ec5fb67d807-kube-api-access-k4mzt\") pod \"4ed42933-309e-4fe0-874b-7ec5fb67d807\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " Feb 19 10:39:19 crc kubenswrapper[4811]: I0219 10:39:19.622764 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-inventory\") pod \"4ed42933-309e-4fe0-874b-7ec5fb67d807\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " Feb 19 10:39:19 crc kubenswrapper[4811]: I0219 10:39:19.623094 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-ssh-key-openstack-edpm-ipam\") pod \"4ed42933-309e-4fe0-874b-7ec5fb67d807\" (UID: \"4ed42933-309e-4fe0-874b-7ec5fb67d807\") " Feb 19 10:39:19 crc kubenswrapper[4811]: I0219 10:39:19.645715 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ed42933-309e-4fe0-874b-7ec5fb67d807-kube-api-access-k4mzt" (OuterVolumeSpecName: "kube-api-access-k4mzt") pod "4ed42933-309e-4fe0-874b-7ec5fb67d807" (UID: "4ed42933-309e-4fe0-874b-7ec5fb67d807"). InnerVolumeSpecName "kube-api-access-k4mzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:39:19 crc kubenswrapper[4811]: I0219 10:39:19.660857 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-inventory" (OuterVolumeSpecName: "inventory") pod "4ed42933-309e-4fe0-874b-7ec5fb67d807" (UID: "4ed42933-309e-4fe0-874b-7ec5fb67d807"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:39:19 crc kubenswrapper[4811]: I0219 10:39:19.663292 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4ed42933-309e-4fe0-874b-7ec5fb67d807" (UID: "4ed42933-309e-4fe0-874b-7ec5fb67d807"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:39:19 crc kubenswrapper[4811]: I0219 10:39:19.725770 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:19 crc kubenswrapper[4811]: I0219 10:39:19.725812 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4mzt\" (UniqueName: \"kubernetes.io/projected/4ed42933-309e-4fe0-874b-7ec5fb67d807-kube-api-access-k4mzt\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:19 crc kubenswrapper[4811]: I0219 10:39:19.725824 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ed42933-309e-4fe0-874b-7ec5fb67d807-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.033086 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" event={"ID":"4ed42933-309e-4fe0-874b-7ec5fb67d807","Type":"ContainerDied","Data":"73cfa681be904d4e9d94190cd88683ff63a087d3c49e6f40079bab78415f7e3d"} Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.033328 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73cfa681be904d4e9d94190cd88683ff63a087d3c49e6f40079bab78415f7e3d" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 
10:39:20.033289 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kg62s" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.181914 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt"] Feb 19 10:39:20 crc kubenswrapper[4811]: E0219 10:39:20.182325 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed42933-309e-4fe0-874b-7ec5fb67d807" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.182346 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed42933-309e-4fe0-874b-7ec5fb67d807" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.182564 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed42933-309e-4fe0-874b-7ec5fb67d807" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.183171 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.185715 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.185857 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.186064 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.185799 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.203668 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt"] Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.337077 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5xsk\" (UniqueName: \"kubernetes.io/projected/0409deb0-2253-48f7-8f17-3bf3c70088ef-kube-api-access-t5xsk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6crt\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.337588 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6crt\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:39:20 crc 
kubenswrapper[4811]: I0219 10:39:20.337889 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6crt\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.439715 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5xsk\" (UniqueName: \"kubernetes.io/projected/0409deb0-2253-48f7-8f17-3bf3c70088ef-kube-api-access-t5xsk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6crt\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.440809 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6crt\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.440903 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6crt\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.452334 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6crt\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.452952 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6crt\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.465626 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5xsk\" (UniqueName: \"kubernetes.io/projected/0409deb0-2253-48f7-8f17-3bf3c70088ef-kube-api-access-t5xsk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6crt\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:39:20 crc kubenswrapper[4811]: I0219 10:39:20.498930 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:39:21 crc kubenswrapper[4811]: I0219 10:39:21.084537 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt"] Feb 19 10:39:22 crc kubenswrapper[4811]: I0219 10:39:22.062299 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" event={"ID":"0409deb0-2253-48f7-8f17-3bf3c70088ef","Type":"ContainerStarted","Data":"18ba5213ae468937b80cb5424af0b10b6d418ed34069155449f3dc6e1fe34468"} Feb 19 10:39:23 crc kubenswrapper[4811]: I0219 10:39:23.076717 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" event={"ID":"0409deb0-2253-48f7-8f17-3bf3c70088ef","Type":"ContainerStarted","Data":"e0aaef1e3c55c4d6f99359085b216670b92e92a179931758fabba3a146c75058"} Feb 19 10:39:23 crc kubenswrapper[4811]: I0219 10:39:23.112209 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" podStartSLOduration=2.173006054 podStartE2EDuration="3.112183953s" podCreationTimestamp="2026-02-19 10:39:20 +0000 UTC" firstStartedPulling="2026-02-19 10:39:21.104245629 +0000 UTC m=+1849.630710315" lastFinishedPulling="2026-02-19 10:39:22.043423528 +0000 UTC m=+1850.569888214" observedRunningTime="2026-02-19 10:39:23.098634548 +0000 UTC m=+1851.625099244" watchObservedRunningTime="2026-02-19 10:39:23.112183953 +0000 UTC m=+1851.638648649" Feb 19 10:39:23 crc kubenswrapper[4811]: I0219 10:39:23.583612 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:39:23 crc kubenswrapper[4811]: E0219 10:39:23.583946 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:39:35 crc kubenswrapper[4811]: I0219 10:39:35.583741 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:39:35 crc kubenswrapper[4811]: E0219 10:39:35.584658 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:39:41 crc kubenswrapper[4811]: I0219 10:39:41.042042 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-f4kvz"] Feb 19 10:39:41 crc kubenswrapper[4811]: I0219 10:39:41.051885 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-f4kvz"] Feb 19 10:39:42 crc kubenswrapper[4811]: I0219 10:39:42.045280 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2472l"] Feb 19 10:39:42 crc kubenswrapper[4811]: I0219 10:39:42.060365 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2472l"] Feb 19 10:39:42 crc kubenswrapper[4811]: I0219 10:39:42.598701 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04" path="/var/lib/kubelet/pods/1fd0f3b6-9a33-4815-93fe-8d4d60ea5d04/volumes" Feb 19 10:39:42 crc kubenswrapper[4811]: I0219 10:39:42.599438 4811 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="2207a190-4ccd-4560-890a-b9d6c5255f6a" path="/var/lib/kubelet/pods/2207a190-4ccd-4560-890a-b9d6c5255f6a/volumes" Feb 19 10:39:48 crc kubenswrapper[4811]: I0219 10:39:48.583551 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:39:48 crc kubenswrapper[4811]: E0219 10:39:48.584361 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:39:49 crc kubenswrapper[4811]: I0219 10:39:49.626168 4811 scope.go:117] "RemoveContainer" containerID="d44b7a74842106a257153acf8d61b6e2d23f7b8f97742bd3dafb8b6adee6af59" Feb 19 10:39:49 crc kubenswrapper[4811]: I0219 10:39:49.928705 4811 scope.go:117] "RemoveContainer" containerID="c7a0b025e6f7bcbac0c12794f24bf43326935fcfdae32beceebac78261f1e559" Feb 19 10:39:50 crc kubenswrapper[4811]: I0219 10:39:50.004476 4811 scope.go:117] "RemoveContainer" containerID="272b02e6235c05038091dc13a6a228f3743e79220b6e507e629029169e307b73" Feb 19 10:40:02 crc kubenswrapper[4811]: I0219 10:40:02.590998 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:40:02 crc kubenswrapper[4811]: E0219 10:40:02.592086 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" 
podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:40:10 crc kubenswrapper[4811]: I0219 10:40:10.578285 4811 generic.go:334] "Generic (PLEG): container finished" podID="0409deb0-2253-48f7-8f17-3bf3c70088ef" containerID="e0aaef1e3c55c4d6f99359085b216670b92e92a179931758fabba3a146c75058" exitCode=0 Feb 19 10:40:10 crc kubenswrapper[4811]: I0219 10:40:10.578392 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" event={"ID":"0409deb0-2253-48f7-8f17-3bf3c70088ef","Type":"ContainerDied","Data":"e0aaef1e3c55c4d6f99359085b216670b92e92a179931758fabba3a146c75058"} Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.076297 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.144894 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5xsk\" (UniqueName: \"kubernetes.io/projected/0409deb0-2253-48f7-8f17-3bf3c70088ef-kube-api-access-t5xsk\") pod \"0409deb0-2253-48f7-8f17-3bf3c70088ef\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.145336 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-inventory\") pod \"0409deb0-2253-48f7-8f17-3bf3c70088ef\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.145737 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-ssh-key-openstack-edpm-ipam\") pod \"0409deb0-2253-48f7-8f17-3bf3c70088ef\" (UID: \"0409deb0-2253-48f7-8f17-3bf3c70088ef\") " Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 
10:40:12.155772 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0409deb0-2253-48f7-8f17-3bf3c70088ef-kube-api-access-t5xsk" (OuterVolumeSpecName: "kube-api-access-t5xsk") pod "0409deb0-2253-48f7-8f17-3bf3c70088ef" (UID: "0409deb0-2253-48f7-8f17-3bf3c70088ef"). InnerVolumeSpecName "kube-api-access-t5xsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.183251 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-inventory" (OuterVolumeSpecName: "inventory") pod "0409deb0-2253-48f7-8f17-3bf3c70088ef" (UID: "0409deb0-2253-48f7-8f17-3bf3c70088ef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.183309 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0409deb0-2253-48f7-8f17-3bf3c70088ef" (UID: "0409deb0-2253-48f7-8f17-3bf3c70088ef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.249430 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.249524 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5xsk\" (UniqueName: \"kubernetes.io/projected/0409deb0-2253-48f7-8f17-3bf3c70088ef-kube-api-access-t5xsk\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.249537 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0409deb0-2253-48f7-8f17-3bf3c70088ef-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.600608 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" event={"ID":"0409deb0-2253-48f7-8f17-3bf3c70088ef","Type":"ContainerDied","Data":"18ba5213ae468937b80cb5424af0b10b6d418ed34069155449f3dc6e1fe34468"} Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.600672 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18ba5213ae468937b80cb5424af0b10b6d418ed34069155449f3dc6e1fe34468" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.600697 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6crt" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.784031 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pdvdx"] Feb 19 10:40:12 crc kubenswrapper[4811]: E0219 10:40:12.784878 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0409deb0-2253-48f7-8f17-3bf3c70088ef" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.785005 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0409deb0-2253-48f7-8f17-3bf3c70088ef" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.785311 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0409deb0-2253-48f7-8f17-3bf3c70088ef" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.786315 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.789346 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.789772 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.790024 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.791159 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.798472 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pdvdx"] Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.866795 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bvkg\" (UniqueName: \"kubernetes.io/projected/b972495c-f6be-4f80-81e5-ac652cf40950-kube-api-access-2bvkg\") pod \"ssh-known-hosts-edpm-deployment-pdvdx\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.867175 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pdvdx\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.867433 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pdvdx\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.970093 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bvkg\" (UniqueName: \"kubernetes.io/projected/b972495c-f6be-4f80-81e5-ac652cf40950-kube-api-access-2bvkg\") pod \"ssh-known-hosts-edpm-deployment-pdvdx\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.970248 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pdvdx\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.970315 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pdvdx\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.982607 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pdvdx\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 
19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.983979 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pdvdx\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:12 crc kubenswrapper[4811]: I0219 10:40:12.990342 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bvkg\" (UniqueName: \"kubernetes.io/projected/b972495c-f6be-4f80-81e5-ac652cf40950-kube-api-access-2bvkg\") pod \"ssh-known-hosts-edpm-deployment-pdvdx\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:13 crc kubenswrapper[4811]: I0219 10:40:13.114223 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:13 crc kubenswrapper[4811]: I0219 10:40:13.771276 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pdvdx"] Feb 19 10:40:14 crc kubenswrapper[4811]: I0219 10:40:14.622065 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" event={"ID":"b972495c-f6be-4f80-81e5-ac652cf40950","Type":"ContainerStarted","Data":"729fa5df054dc654335c27aaf929513bf165bf5c46fb13a86af62d2d86a6b6b0"} Feb 19 10:40:15 crc kubenswrapper[4811]: I0219 10:40:15.583064 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:40:15 crc kubenswrapper[4811]: E0219 10:40:15.583812 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:40:15 crc kubenswrapper[4811]: I0219 10:40:15.635235 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" event={"ID":"b972495c-f6be-4f80-81e5-ac652cf40950","Type":"ContainerStarted","Data":"9c3a6c6e90c12da74153750aa3611fbf00dbf1931ae19abd98be8e6162ca336e"} Feb 19 10:40:15 crc kubenswrapper[4811]: I0219 10:40:15.663238 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" podStartSLOduration=3.078262274 podStartE2EDuration="3.663207459s" podCreationTimestamp="2026-02-19 10:40:12 +0000 UTC" firstStartedPulling="2026-02-19 10:40:13.78200441 +0000 UTC m=+1902.308469096" lastFinishedPulling="2026-02-19 10:40:14.366949575 +0000 UTC m=+1902.893414281" observedRunningTime="2026-02-19 10:40:15.652405474 +0000 UTC m=+1904.178870210" watchObservedRunningTime="2026-02-19 10:40:15.663207459 +0000 UTC m=+1904.189672135" Feb 19 10:40:21 crc kubenswrapper[4811]: I0219 10:40:21.695808 4811 generic.go:334] "Generic (PLEG): container finished" podID="b972495c-f6be-4f80-81e5-ac652cf40950" containerID="9c3a6c6e90c12da74153750aa3611fbf00dbf1931ae19abd98be8e6162ca336e" exitCode=0 Feb 19 10:40:21 crc kubenswrapper[4811]: I0219 10:40:21.695931 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" event={"ID":"b972495c-f6be-4f80-81e5-ac652cf40950","Type":"ContainerDied","Data":"9c3a6c6e90c12da74153750aa3611fbf00dbf1931ae19abd98be8e6162ca336e"} Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.160491 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.342699 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bvkg\" (UniqueName: \"kubernetes.io/projected/b972495c-f6be-4f80-81e5-ac652cf40950-kube-api-access-2bvkg\") pod \"b972495c-f6be-4f80-81e5-ac652cf40950\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.343194 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-inventory-0\") pod \"b972495c-f6be-4f80-81e5-ac652cf40950\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.343329 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-ssh-key-openstack-edpm-ipam\") pod \"b972495c-f6be-4f80-81e5-ac652cf40950\" (UID: \"b972495c-f6be-4f80-81e5-ac652cf40950\") " Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.350746 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b972495c-f6be-4f80-81e5-ac652cf40950-kube-api-access-2bvkg" (OuterVolumeSpecName: "kube-api-access-2bvkg") pod "b972495c-f6be-4f80-81e5-ac652cf40950" (UID: "b972495c-f6be-4f80-81e5-ac652cf40950"). InnerVolumeSpecName "kube-api-access-2bvkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.375605 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b972495c-f6be-4f80-81e5-ac652cf40950" (UID: "b972495c-f6be-4f80-81e5-ac652cf40950"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.378201 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b972495c-f6be-4f80-81e5-ac652cf40950" (UID: "b972495c-f6be-4f80-81e5-ac652cf40950"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.445958 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bvkg\" (UniqueName: \"kubernetes.io/projected/b972495c-f6be-4f80-81e5-ac652cf40950-kube-api-access-2bvkg\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.445995 4811 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.446009 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b972495c-f6be-4f80-81e5-ac652cf40950-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.720062 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" 
event={"ID":"b972495c-f6be-4f80-81e5-ac652cf40950","Type":"ContainerDied","Data":"729fa5df054dc654335c27aaf929513bf165bf5c46fb13a86af62d2d86a6b6b0"} Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.720121 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="729fa5df054dc654335c27aaf929513bf165bf5c46fb13a86af62d2d86a6b6b0" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.720155 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pdvdx" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.827997 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf"] Feb 19 10:40:23 crc kubenswrapper[4811]: E0219 10:40:23.828760 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b972495c-f6be-4f80-81e5-ac652cf40950" containerName="ssh-known-hosts-edpm-deployment" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.828798 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="b972495c-f6be-4f80-81e5-ac652cf40950" containerName="ssh-known-hosts-edpm-deployment" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.829116 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b972495c-f6be-4f80-81e5-ac652cf40950" containerName="ssh-known-hosts-edpm-deployment" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.830476 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.833380 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.833771 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.833874 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.836546 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.846326 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf"] Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.960149 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mcrbf\" (UID: \"6b88198f-8c56-4112-956d-2361512d5f2e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.960347 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv8fp\" (UniqueName: \"kubernetes.io/projected/6b88198f-8c56-4112-956d-2361512d5f2e-kube-api-access-pv8fp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mcrbf\" (UID: \"6b88198f-8c56-4112-956d-2361512d5f2e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:23 crc kubenswrapper[4811]: I0219 10:40:23.960487 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mcrbf\" (UID: \"6b88198f-8c56-4112-956d-2361512d5f2e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:24 crc kubenswrapper[4811]: I0219 10:40:24.062253 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mcrbf\" (UID: \"6b88198f-8c56-4112-956d-2361512d5f2e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:24 crc kubenswrapper[4811]: I0219 10:40:24.062398 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mcrbf\" (UID: \"6b88198f-8c56-4112-956d-2361512d5f2e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:24 crc kubenswrapper[4811]: I0219 10:40:24.062472 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv8fp\" (UniqueName: \"kubernetes.io/projected/6b88198f-8c56-4112-956d-2361512d5f2e-kube-api-access-pv8fp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mcrbf\" (UID: \"6b88198f-8c56-4112-956d-2361512d5f2e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:24 crc kubenswrapper[4811]: I0219 10:40:24.066600 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mcrbf\" (UID: 
\"6b88198f-8c56-4112-956d-2361512d5f2e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:24 crc kubenswrapper[4811]: I0219 10:40:24.066676 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mcrbf\" (UID: \"6b88198f-8c56-4112-956d-2361512d5f2e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:24 crc kubenswrapper[4811]: I0219 10:40:24.085000 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv8fp\" (UniqueName: \"kubernetes.io/projected/6b88198f-8c56-4112-956d-2361512d5f2e-kube-api-access-pv8fp\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mcrbf\" (UID: \"6b88198f-8c56-4112-956d-2361512d5f2e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:24 crc kubenswrapper[4811]: I0219 10:40:24.159033 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:24 crc kubenswrapper[4811]: I0219 10:40:24.739120 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf"] Feb 19 10:40:24 crc kubenswrapper[4811]: I0219 10:40:24.940434 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6vjdr"] Feb 19 10:40:24 crc kubenswrapper[4811]: I0219 10:40:24.944549 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:24 crc kubenswrapper[4811]: I0219 10:40:24.962064 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vjdr"] Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.092377 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48p7v\" (UniqueName: \"kubernetes.io/projected/eb37647d-32f6-4f37-865d-e67077b17ba4-kube-api-access-48p7v\") pod \"redhat-operators-6vjdr\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.092514 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-catalog-content\") pod \"redhat-operators-6vjdr\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.092636 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-utilities\") pod \"redhat-operators-6vjdr\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.197111 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48p7v\" (UniqueName: \"kubernetes.io/projected/eb37647d-32f6-4f37-865d-e67077b17ba4-kube-api-access-48p7v\") pod \"redhat-operators-6vjdr\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.197187 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-catalog-content\") pod \"redhat-operators-6vjdr\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.197232 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-utilities\") pod \"redhat-operators-6vjdr\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.197739 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-catalog-content\") pod \"redhat-operators-6vjdr\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.197755 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-utilities\") pod \"redhat-operators-6vjdr\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.218346 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48p7v\" (UniqueName: \"kubernetes.io/projected/eb37647d-32f6-4f37-865d-e67077b17ba4-kube-api-access-48p7v\") pod \"redhat-operators-6vjdr\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.271243 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.745136 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" event={"ID":"6b88198f-8c56-4112-956d-2361512d5f2e","Type":"ContainerStarted","Data":"467b40af36578e38a4e5bd322a1320cffa34db2fb6092ef42e42e85002f2e4fb"} Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.745783 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" event={"ID":"6b88198f-8c56-4112-956d-2361512d5f2e","Type":"ContainerStarted","Data":"32f2f2225dbdeec75dd2dcf91fc72a38508bcc3d584fc2caca62e8d13b1b5f0d"} Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.824700 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" podStartSLOduration=2.41776006 podStartE2EDuration="2.824667034s" podCreationTimestamp="2026-02-19 10:40:23 +0000 UTC" firstStartedPulling="2026-02-19 10:40:24.765222385 +0000 UTC m=+1913.291687081" lastFinishedPulling="2026-02-19 10:40:25.172129359 +0000 UTC m=+1913.698594055" observedRunningTime="2026-02-19 10:40:25.768043103 +0000 UTC m=+1914.294507809" watchObservedRunningTime="2026-02-19 10:40:25.824667034 +0000 UTC m=+1914.351131720" Feb 19 10:40:25 crc kubenswrapper[4811]: I0219 10:40:25.827940 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vjdr"] Feb 19 10:40:25 crc kubenswrapper[4811]: W0219 10:40:25.838370 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb37647d_32f6_4f37_865d_e67077b17ba4.slice/crio-841cd5df8140b3ca4861cfa2b96195db9be6b5a1cba4bb4339e5458bd6382cb3 WatchSource:0}: Error finding container 841cd5df8140b3ca4861cfa2b96195db9be6b5a1cba4bb4339e5458bd6382cb3: Status 404 returned 
error can't find the container with id 841cd5df8140b3ca4861cfa2b96195db9be6b5a1cba4bb4339e5458bd6382cb3 Feb 19 10:40:26 crc kubenswrapper[4811]: I0219 10:40:26.074102 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-654xk"] Feb 19 10:40:26 crc kubenswrapper[4811]: I0219 10:40:26.086914 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-654xk"] Feb 19 10:40:26 crc kubenswrapper[4811]: I0219 10:40:26.600516 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b0af941-4e43-49e9-903b-438680c89e3c" path="/var/lib/kubelet/pods/1b0af941-4e43-49e9-903b-438680c89e3c/volumes" Feb 19 10:40:26 crc kubenswrapper[4811]: I0219 10:40:26.768171 4811 generic.go:334] "Generic (PLEG): container finished" podID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerID="32287af19a1972bbc67b5965c314f0cd3bd4b22af17fa76f324a2bfa364b7ad0" exitCode=0 Feb 19 10:40:26 crc kubenswrapper[4811]: I0219 10:40:26.768275 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjdr" event={"ID":"eb37647d-32f6-4f37-865d-e67077b17ba4","Type":"ContainerDied","Data":"32287af19a1972bbc67b5965c314f0cd3bd4b22af17fa76f324a2bfa364b7ad0"} Feb 19 10:40:26 crc kubenswrapper[4811]: I0219 10:40:26.768339 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjdr" event={"ID":"eb37647d-32f6-4f37-865d-e67077b17ba4","Type":"ContainerStarted","Data":"841cd5df8140b3ca4861cfa2b96195db9be6b5a1cba4bb4339e5458bd6382cb3"} Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.332701 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jjrh2"] Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.338235 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.343997 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjrh2"] Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.382528 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-764kg\" (UniqueName: \"kubernetes.io/projected/fb5be783-336b-4e72-b544-003e4ec3e969-kube-api-access-764kg\") pod \"community-operators-jjrh2\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.382639 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-utilities\") pod \"community-operators-jjrh2\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.382718 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-catalog-content\") pod \"community-operators-jjrh2\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.485210 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-catalog-content\") pod \"community-operators-jjrh2\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.485408 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-764kg\" (UniqueName: \"kubernetes.io/projected/fb5be783-336b-4e72-b544-003e4ec3e969-kube-api-access-764kg\") pod \"community-operators-jjrh2\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.485803 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-utilities\") pod \"community-operators-jjrh2\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.486072 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-catalog-content\") pod \"community-operators-jjrh2\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.486344 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-utilities\") pod \"community-operators-jjrh2\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.512039 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-764kg\" (UniqueName: \"kubernetes.io/projected/fb5be783-336b-4e72-b544-003e4ec3e969-kube-api-access-764kg\") pod \"community-operators-jjrh2\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.529966 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-x77p9"] Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.534259 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.549038 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x77p9"] Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.590104 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-utilities\") pod \"redhat-marketplace-x77p9\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.590491 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv8nx\" (UniqueName: \"kubernetes.io/projected/ef8d2c93-e114-4869-a2b1-2429228f494c-kube-api-access-jv8nx\") pod \"redhat-marketplace-x77p9\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.590613 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-catalog-content\") pod \"redhat-marketplace-x77p9\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.693594 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-utilities\") pod \"redhat-marketplace-x77p9\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " 
pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.693663 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv8nx\" (UniqueName: \"kubernetes.io/projected/ef8d2c93-e114-4869-a2b1-2429228f494c-kube-api-access-jv8nx\") pod \"redhat-marketplace-x77p9\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.693698 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-catalog-content\") pod \"redhat-marketplace-x77p9\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.694307 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-utilities\") pod \"redhat-marketplace-x77p9\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.694323 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-catalog-content\") pod \"redhat-marketplace-x77p9\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.718744 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv8nx\" (UniqueName: \"kubernetes.io/projected/ef8d2c93-e114-4869-a2b1-2429228f494c-kube-api-access-jv8nx\") pod \"redhat-marketplace-x77p9\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " 
pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.732322 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:27 crc kubenswrapper[4811]: I0219 10:40:27.905686 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:28 crc kubenswrapper[4811]: W0219 10:40:28.473589 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb5be783_336b_4e72_b544_003e4ec3e969.slice/crio-3688d421c46b3c551e2e10732f40b7184ee0d50cc72c33d7756e80f111bcd01e WatchSource:0}: Error finding container 3688d421c46b3c551e2e10732f40b7184ee0d50cc72c33d7756e80f111bcd01e: Status 404 returned error can't find the container with id 3688d421c46b3c551e2e10732f40b7184ee0d50cc72c33d7756e80f111bcd01e Feb 19 10:40:28 crc kubenswrapper[4811]: I0219 10:40:28.480073 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjrh2"] Feb 19 10:40:28 crc kubenswrapper[4811]: W0219 10:40:28.596663 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef8d2c93_e114_4869_a2b1_2429228f494c.slice/crio-64ca8ec4f4936e24a804ef6a8115583d95a0856cb41432391c8c974ed6efc5a8 WatchSource:0}: Error finding container 64ca8ec4f4936e24a804ef6a8115583d95a0856cb41432391c8c974ed6efc5a8: Status 404 returned error can't find the container with id 64ca8ec4f4936e24a804ef6a8115583d95a0856cb41432391c8c974ed6efc5a8 Feb 19 10:40:28 crc kubenswrapper[4811]: I0219 10:40:28.597260 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x77p9"] Feb 19 10:40:28 crc kubenswrapper[4811]: I0219 10:40:28.807298 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-x77p9" event={"ID":"ef8d2c93-e114-4869-a2b1-2429228f494c","Type":"ContainerStarted","Data":"64ca8ec4f4936e24a804ef6a8115583d95a0856cb41432391c8c974ed6efc5a8"} Feb 19 10:40:28 crc kubenswrapper[4811]: I0219 10:40:28.812561 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjrh2" event={"ID":"fb5be783-336b-4e72-b544-003e4ec3e969","Type":"ContainerStarted","Data":"3688d421c46b3c551e2e10732f40b7184ee0d50cc72c33d7756e80f111bcd01e"} Feb 19 10:40:28 crc kubenswrapper[4811]: I0219 10:40:28.815128 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjdr" event={"ID":"eb37647d-32f6-4f37-865d-e67077b17ba4","Type":"ContainerStarted","Data":"ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651"} Feb 19 10:40:29 crc kubenswrapper[4811]: I0219 10:40:29.829810 4811 generic.go:334] "Generic (PLEG): container finished" podID="fb5be783-336b-4e72-b544-003e4ec3e969" containerID="9265337708bcc10c2fb01fdec02d88552de5e65dabc6671a02e59d2c9d6e2fb4" exitCode=0 Feb 19 10:40:29 crc kubenswrapper[4811]: I0219 10:40:29.829886 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjrh2" event={"ID":"fb5be783-336b-4e72-b544-003e4ec3e969","Type":"ContainerDied","Data":"9265337708bcc10c2fb01fdec02d88552de5e65dabc6671a02e59d2c9d6e2fb4"} Feb 19 10:40:29 crc kubenswrapper[4811]: I0219 10:40:29.833015 4811 generic.go:334] "Generic (PLEG): container finished" podID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerID="ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651" exitCode=0 Feb 19 10:40:29 crc kubenswrapper[4811]: I0219 10:40:29.833123 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjdr" 
event={"ID":"eb37647d-32f6-4f37-865d-e67077b17ba4","Type":"ContainerDied","Data":"ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651"} Feb 19 10:40:29 crc kubenswrapper[4811]: I0219 10:40:29.844974 4811 generic.go:334] "Generic (PLEG): container finished" podID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerID="6df524b6167df1faa03ac0bdcb61ecd69fa177a3b2135807d55c713bcce49241" exitCode=0 Feb 19 10:40:29 crc kubenswrapper[4811]: I0219 10:40:29.845044 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x77p9" event={"ID":"ef8d2c93-e114-4869-a2b1-2429228f494c","Type":"ContainerDied","Data":"6df524b6167df1faa03ac0bdcb61ecd69fa177a3b2135807d55c713bcce49241"} Feb 19 10:40:30 crc kubenswrapper[4811]: I0219 10:40:30.582970 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:40:30 crc kubenswrapper[4811]: E0219 10:40:30.584335 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:40:33 crc kubenswrapper[4811]: I0219 10:40:33.062720 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjdr" event={"ID":"eb37647d-32f6-4f37-865d-e67077b17ba4","Type":"ContainerStarted","Data":"f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a"} Feb 19 10:40:33 crc kubenswrapper[4811]: I0219 10:40:33.066881 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x77p9" 
event={"ID":"ef8d2c93-e114-4869-a2b1-2429228f494c","Type":"ContainerStarted","Data":"b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911"} Feb 19 10:40:33 crc kubenswrapper[4811]: I0219 10:40:33.072813 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjrh2" event={"ID":"fb5be783-336b-4e72-b544-003e4ec3e969","Type":"ContainerStarted","Data":"a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38"} Feb 19 10:40:33 crc kubenswrapper[4811]: I0219 10:40:33.097057 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6vjdr" podStartSLOduration=3.241320642 podStartE2EDuration="9.097029575s" podCreationTimestamp="2026-02-19 10:40:24 +0000 UTC" firstStartedPulling="2026-02-19 10:40:26.778494665 +0000 UTC m=+1915.304959341" lastFinishedPulling="2026-02-19 10:40:32.634203588 +0000 UTC m=+1921.160668274" observedRunningTime="2026-02-19 10:40:33.088400246 +0000 UTC m=+1921.614864932" watchObservedRunningTime="2026-02-19 10:40:33.097029575 +0000 UTC m=+1921.623494261" Feb 19 10:40:34 crc kubenswrapper[4811]: I0219 10:40:34.085262 4811 generic.go:334] "Generic (PLEG): container finished" podID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerID="b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911" exitCode=0 Feb 19 10:40:34 crc kubenswrapper[4811]: I0219 10:40:34.085415 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x77p9" event={"ID":"ef8d2c93-e114-4869-a2b1-2429228f494c","Type":"ContainerDied","Data":"b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911"} Feb 19 10:40:34 crc kubenswrapper[4811]: I0219 10:40:34.088190 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:40:35 crc kubenswrapper[4811]: I0219 10:40:35.111600 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="fb5be783-336b-4e72-b544-003e4ec3e969" containerID="a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38" exitCode=0 Feb 19 10:40:35 crc kubenswrapper[4811]: I0219 10:40:35.112038 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjrh2" event={"ID":"fb5be783-336b-4e72-b544-003e4ec3e969","Type":"ContainerDied","Data":"a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38"} Feb 19 10:40:35 crc kubenswrapper[4811]: I0219 10:40:35.271331 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:35 crc kubenswrapper[4811]: I0219 10:40:35.271388 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:40:36 crc kubenswrapper[4811]: I0219 10:40:36.126109 4811 generic.go:334] "Generic (PLEG): container finished" podID="6b88198f-8c56-4112-956d-2361512d5f2e" containerID="467b40af36578e38a4e5bd322a1320cffa34db2fb6092ef42e42e85002f2e4fb" exitCode=0 Feb 19 10:40:36 crc kubenswrapper[4811]: I0219 10:40:36.126201 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" event={"ID":"6b88198f-8c56-4112-956d-2361512d5f2e","Type":"ContainerDied","Data":"467b40af36578e38a4e5bd322a1320cffa34db2fb6092ef42e42e85002f2e4fb"} Feb 19 10:40:36 crc kubenswrapper[4811]: I0219 10:40:36.336876 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6vjdr" podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerName="registry-server" probeResult="failure" output=< Feb 19 10:40:36 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 19 10:40:36 crc kubenswrapper[4811]: > Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.137108 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-jjrh2" event={"ID":"fb5be783-336b-4e72-b544-003e4ec3e969","Type":"ContainerStarted","Data":"c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883"} Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.140202 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x77p9" event={"ID":"ef8d2c93-e114-4869-a2b1-2429228f494c","Type":"ContainerStarted","Data":"d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116"} Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.162091 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jjrh2" podStartSLOduration=3.651381943 podStartE2EDuration="10.162063543s" podCreationTimestamp="2026-02-19 10:40:27 +0000 UTC" firstStartedPulling="2026-02-19 10:40:29.833322177 +0000 UTC m=+1918.359786863" lastFinishedPulling="2026-02-19 10:40:36.344003777 +0000 UTC m=+1924.870468463" observedRunningTime="2026-02-19 10:40:37.160236217 +0000 UTC m=+1925.686700903" watchObservedRunningTime="2026-02-19 10:40:37.162063543 +0000 UTC m=+1925.688528239" Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.210744 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x77p9" podStartSLOduration=3.766211696 podStartE2EDuration="10.210422164s" podCreationTimestamp="2026-02-19 10:40:27 +0000 UTC" firstStartedPulling="2026-02-19 10:40:29.847058837 +0000 UTC m=+1918.373523523" lastFinishedPulling="2026-02-19 10:40:36.291269305 +0000 UTC m=+1924.817733991" observedRunningTime="2026-02-19 10:40:37.182617606 +0000 UTC m=+1925.709082282" watchObservedRunningTime="2026-02-19 10:40:37.210422164 +0000 UTC m=+1925.736886850" Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.650790 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.733832 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.733878 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.784426 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-inventory\") pod \"6b88198f-8c56-4112-956d-2361512d5f2e\" (UID: \"6b88198f-8c56-4112-956d-2361512d5f2e\") " Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.784791 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-ssh-key-openstack-edpm-ipam\") pod \"6b88198f-8c56-4112-956d-2361512d5f2e\" (UID: \"6b88198f-8c56-4112-956d-2361512d5f2e\") " Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.784902 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv8fp\" (UniqueName: \"kubernetes.io/projected/6b88198f-8c56-4112-956d-2361512d5f2e-kube-api-access-pv8fp\") pod \"6b88198f-8c56-4112-956d-2361512d5f2e\" (UID: \"6b88198f-8c56-4112-956d-2361512d5f2e\") " Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.792657 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b88198f-8c56-4112-956d-2361512d5f2e-kube-api-access-pv8fp" (OuterVolumeSpecName: "kube-api-access-pv8fp") pod "6b88198f-8c56-4112-956d-2361512d5f2e" (UID: "6b88198f-8c56-4112-956d-2361512d5f2e"). InnerVolumeSpecName "kube-api-access-pv8fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.819194 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6b88198f-8c56-4112-956d-2361512d5f2e" (UID: "6b88198f-8c56-4112-956d-2361512d5f2e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.820644 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-inventory" (OuterVolumeSpecName: "inventory") pod "6b88198f-8c56-4112-956d-2361512d5f2e" (UID: "6b88198f-8c56-4112-956d-2361512d5f2e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.889673 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.889719 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv8fp\" (UniqueName: \"kubernetes.io/projected/6b88198f-8c56-4112-956d-2361512d5f2e-kube-api-access-pv8fp\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.889734 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b88198f-8c56-4112-956d-2361512d5f2e-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:37 crc kubenswrapper[4811]: I0219 10:40:37.906856 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:37 crc 
kubenswrapper[4811]: I0219 10:40:37.906972 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.152012 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.154562 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mcrbf" event={"ID":"6b88198f-8c56-4112-956d-2361512d5f2e","Type":"ContainerDied","Data":"32f2f2225dbdeec75dd2dcf91fc72a38508bcc3d584fc2caca62e8d13b1b5f0d"} Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.154599 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f2f2225dbdeec75dd2dcf91fc72a38508bcc3d584fc2caca62e8d13b1b5f0d" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.238307 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl"] Feb 19 10:40:38 crc kubenswrapper[4811]: E0219 10:40:38.238959 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b88198f-8c56-4112-956d-2361512d5f2e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.238983 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b88198f-8c56-4112-956d-2361512d5f2e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.239210 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b88198f-8c56-4112-956d-2361512d5f2e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.240062 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.242754 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.242996 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.243881 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.244359 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.251104 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl"] Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.400900 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrngd\" (UniqueName: \"kubernetes.io/projected/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-kube-api-access-vrngd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.400951 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.401067 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.502756 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.502923 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrngd\" (UniqueName: \"kubernetes.io/projected/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-kube-api-access-vrngd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.502948 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.522416 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.522680 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.528974 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrngd\" (UniqueName: \"kubernetes.io/projected/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-kube-api-access-vrngd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.559848 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.792901 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jjrh2" podUID="fb5be783-336b-4e72-b544-003e4ec3e969" containerName="registry-server" probeResult="failure" output=< Feb 19 10:40:38 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 19 10:40:38 crc kubenswrapper[4811]: > Feb 19 10:40:38 crc kubenswrapper[4811]: I0219 10:40:38.965921 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-x77p9" podUID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerName="registry-server" probeResult="failure" output=< Feb 19 10:40:38 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 19 10:40:38 crc kubenswrapper[4811]: > Feb 19 10:40:39 crc kubenswrapper[4811]: I0219 10:40:39.131485 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl"] Feb 19 10:40:39 crc kubenswrapper[4811]: I0219 10:40:39.164931 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" event={"ID":"6563b2b6-b15d-48ca-9c1f-75f9f17321c9","Type":"ContainerStarted","Data":"986f5b4d5f7c2147b002d16c2ba654df12e1751cdac806e9124d878aef7fc9a0"} Feb 19 10:40:40 crc kubenswrapper[4811]: I0219 10:40:40.179166 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" event={"ID":"6563b2b6-b15d-48ca-9c1f-75f9f17321c9","Type":"ContainerStarted","Data":"e77e953b2c0073341df9e6025e5224149ba2138e6d041b265b6f8b536508b6e5"} Feb 19 10:40:40 crc kubenswrapper[4811]: I0219 10:40:40.207086 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" 
podStartSLOduration=1.735819684 podStartE2EDuration="2.207058995s" podCreationTimestamp="2026-02-19 10:40:38 +0000 UTC" firstStartedPulling="2026-02-19 10:40:39.144691602 +0000 UTC m=+1927.671156288" lastFinishedPulling="2026-02-19 10:40:39.615930913 +0000 UTC m=+1928.142395599" observedRunningTime="2026-02-19 10:40:40.196089976 +0000 UTC m=+1928.722554672" watchObservedRunningTime="2026-02-19 10:40:40.207058995 +0000 UTC m=+1928.733523681" Feb 19 10:40:42 crc kubenswrapper[4811]: I0219 10:40:42.589933 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:40:42 crc kubenswrapper[4811]: E0219 10:40:42.590808 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:40:46 crc kubenswrapper[4811]: I0219 10:40:46.318405 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6vjdr" podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerName="registry-server" probeResult="failure" output=< Feb 19 10:40:46 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 19 10:40:46 crc kubenswrapper[4811]: > Feb 19 10:40:47 crc kubenswrapper[4811]: I0219 10:40:47.799477 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:47 crc kubenswrapper[4811]: I0219 10:40:47.857019 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:47 crc kubenswrapper[4811]: I0219 10:40:47.968083 4811 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:48 crc kubenswrapper[4811]: I0219 10:40:48.022594 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.142116 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x77p9"] Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.268993 4811 generic.go:334] "Generic (PLEG): container finished" podID="6563b2b6-b15d-48ca-9c1f-75f9f17321c9" containerID="e77e953b2c0073341df9e6025e5224149ba2138e6d041b265b6f8b536508b6e5" exitCode=0 Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.269254 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x77p9" podUID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerName="registry-server" containerID="cri-o://d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116" gracePeriod=2 Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.269770 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" event={"ID":"6563b2b6-b15d-48ca-9c1f-75f9f17321c9","Type":"ContainerDied","Data":"e77e953b2c0073341df9e6025e5224149ba2138e6d041b265b6f8b536508b6e5"} Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.762268 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.859288 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-utilities\") pod \"ef8d2c93-e114-4869-a2b1-2429228f494c\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.859474 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-catalog-content\") pod \"ef8d2c93-e114-4869-a2b1-2429228f494c\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.859832 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv8nx\" (UniqueName: \"kubernetes.io/projected/ef8d2c93-e114-4869-a2b1-2429228f494c-kube-api-access-jv8nx\") pod \"ef8d2c93-e114-4869-a2b1-2429228f494c\" (UID: \"ef8d2c93-e114-4869-a2b1-2429228f494c\") " Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.860309 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-utilities" (OuterVolumeSpecName: "utilities") pod "ef8d2c93-e114-4869-a2b1-2429228f494c" (UID: "ef8d2c93-e114-4869-a2b1-2429228f494c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.860696 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.866442 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef8d2c93-e114-4869-a2b1-2429228f494c-kube-api-access-jv8nx" (OuterVolumeSpecName: "kube-api-access-jv8nx") pod "ef8d2c93-e114-4869-a2b1-2429228f494c" (UID: "ef8d2c93-e114-4869-a2b1-2429228f494c"). InnerVolumeSpecName "kube-api-access-jv8nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.891434 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef8d2c93-e114-4869-a2b1-2429228f494c" (UID: "ef8d2c93-e114-4869-a2b1-2429228f494c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.963207 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef8d2c93-e114-4869-a2b1-2429228f494c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:49 crc kubenswrapper[4811]: I0219 10:40:49.963241 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv8nx\" (UniqueName: \"kubernetes.io/projected/ef8d2c93-e114-4869-a2b1-2429228f494c-kube-api-access-jv8nx\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.136359 4811 scope.go:117] "RemoveContainer" containerID="797124afab4855364d9a11b8d2b68ce1c0f981de88f73d7cf60c3e68bd4e8b0f" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.283674 4811 generic.go:334] "Generic (PLEG): container finished" podID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerID="d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116" exitCode=0 Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.283788 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x77p9" event={"ID":"ef8d2c93-e114-4869-a2b1-2429228f494c","Type":"ContainerDied","Data":"d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116"} Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.283830 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x77p9" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.283851 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x77p9" event={"ID":"ef8d2c93-e114-4869-a2b1-2429228f494c","Type":"ContainerDied","Data":"64ca8ec4f4936e24a804ef6a8115583d95a0856cb41432391c8c974ed6efc5a8"} Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.283912 4811 scope.go:117] "RemoveContainer" containerID="d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.308856 4811 scope.go:117] "RemoveContainer" containerID="b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.336871 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x77p9"] Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.348147 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x77p9"] Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.354475 4811 scope.go:117] "RemoveContainer" containerID="6df524b6167df1faa03ac0bdcb61ecd69fa177a3b2135807d55c713bcce49241" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.394290 4811 scope.go:117] "RemoveContainer" containerID="d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116" Feb 19 10:40:50 crc kubenswrapper[4811]: E0219 10:40:50.394805 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116\": container with ID starting with d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116 not found: ID does not exist" containerID="d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.394858 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116"} err="failed to get container status \"d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116\": rpc error: code = NotFound desc = could not find container \"d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116\": container with ID starting with d1b22c84f98c76a61218ac3e487b4c786f4ec0323db7addbf2bec1c14ac49116 not found: ID does not exist" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.394897 4811 scope.go:117] "RemoveContainer" containerID="b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911" Feb 19 10:40:50 crc kubenswrapper[4811]: E0219 10:40:50.398159 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911\": container with ID starting with b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911 not found: ID does not exist" containerID="b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.398212 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911"} err="failed to get container status \"b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911\": rpc error: code = NotFound desc = could not find container \"b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911\": container with ID starting with b7c1881af88891f9869a6fd238a5996a0beda7698ad0538d2118065e616a5911 not found: ID does not exist" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.398248 4811 scope.go:117] "RemoveContainer" containerID="6df524b6167df1faa03ac0bdcb61ecd69fa177a3b2135807d55c713bcce49241" Feb 19 10:40:50 crc kubenswrapper[4811]: E0219 
10:40:50.400362 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6df524b6167df1faa03ac0bdcb61ecd69fa177a3b2135807d55c713bcce49241\": container with ID starting with 6df524b6167df1faa03ac0bdcb61ecd69fa177a3b2135807d55c713bcce49241 not found: ID does not exist" containerID="6df524b6167df1faa03ac0bdcb61ecd69fa177a3b2135807d55c713bcce49241" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.400423 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6df524b6167df1faa03ac0bdcb61ecd69fa177a3b2135807d55c713bcce49241"} err="failed to get container status \"6df524b6167df1faa03ac0bdcb61ecd69fa177a3b2135807d55c713bcce49241\": rpc error: code = NotFound desc = could not find container \"6df524b6167df1faa03ac0bdcb61ecd69fa177a3b2135807d55c713bcce49241\": container with ID starting with 6df524b6167df1faa03ac0bdcb61ecd69fa177a3b2135807d55c713bcce49241 not found: ID does not exist" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.592887 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef8d2c93-e114-4869-a2b1-2429228f494c" path="/var/lib/kubelet/pods/ef8d2c93-e114-4869-a2b1-2429228f494c/volumes" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.893297 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.982890 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-inventory\") pod \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.983105 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-ssh-key-openstack-edpm-ipam\") pod \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.983143 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrngd\" (UniqueName: \"kubernetes.io/projected/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-kube-api-access-vrngd\") pod \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\" (UID: \"6563b2b6-b15d-48ca-9c1f-75f9f17321c9\") " Feb 19 10:40:50 crc kubenswrapper[4811]: I0219 10:40:50.988873 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-kube-api-access-vrngd" (OuterVolumeSpecName: "kube-api-access-vrngd") pod "6563b2b6-b15d-48ca-9c1f-75f9f17321c9" (UID: "6563b2b6-b15d-48ca-9c1f-75f9f17321c9"). InnerVolumeSpecName "kube-api-access-vrngd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.011856 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-inventory" (OuterVolumeSpecName: "inventory") pod "6563b2b6-b15d-48ca-9c1f-75f9f17321c9" (UID: "6563b2b6-b15d-48ca-9c1f-75f9f17321c9"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.017557 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6563b2b6-b15d-48ca-9c1f-75f9f17321c9" (UID: "6563b2b6-b15d-48ca-9c1f-75f9f17321c9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.085508 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.085859 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrngd\" (UniqueName: \"kubernetes.io/projected/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-kube-api-access-vrngd\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.085870 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6563b2b6-b15d-48ca-9c1f-75f9f17321c9-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.296327 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.296336 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl" event={"ID":"6563b2b6-b15d-48ca-9c1f-75f9f17321c9","Type":"ContainerDied","Data":"986f5b4d5f7c2147b002d16c2ba654df12e1751cdac806e9124d878aef7fc9a0"} Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.296546 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="986f5b4d5f7c2147b002d16c2ba654df12e1751cdac806e9124d878aef7fc9a0" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.369285 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8"] Feb 19 10:40:51 crc kubenswrapper[4811]: E0219 10:40:51.369747 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerName="extract-content" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.369766 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerName="extract-content" Feb 19 10:40:51 crc kubenswrapper[4811]: E0219 10:40:51.369775 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerName="extract-utilities" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.369781 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerName="extract-utilities" Feb 19 10:40:51 crc kubenswrapper[4811]: E0219 10:40:51.369809 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6563b2b6-b15d-48ca-9c1f-75f9f17321c9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.369818 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6563b2b6-b15d-48ca-9c1f-75f9f17321c9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:40:51 crc kubenswrapper[4811]: E0219 10:40:51.369831 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerName="registry-server" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.369843 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerName="registry-server" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.370057 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef8d2c93-e114-4869-a2b1-2429228f494c" containerName="registry-server" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.370107 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="6563b2b6-b15d-48ca-9c1f-75f9f17321c9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.370888 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.374001 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.374121 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.374418 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.374840 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.377417 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.377497 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.380518 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.387633 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.394620 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8"] Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.498863 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.498983 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.499059 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.499087 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.499132 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.499157 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.499259 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.499371 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.499546 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.499737 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.499992 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.500019 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2h45\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-kube-api-access-p2h45\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.500131 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.500195 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.602898 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.602988 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603046 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603095 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603174 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603201 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603243 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603279 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603306 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603339 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603379 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" 
(UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603429 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603515 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.603544 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2h45\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-kube-api-access-p2h45\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.609131 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 
10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.609872 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.610372 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.610754 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.610861 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.610894 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.611284 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.612015 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.612997 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.613109 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.613414 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.614796 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.614962 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.622118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2h45\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-kube-api-access-p2h45\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:51 crc kubenswrapper[4811]: I0219 10:40:51.704653 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:40:52 crc kubenswrapper[4811]: I0219 10:40:52.285756 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8"] Feb 19 10:40:52 crc kubenswrapper[4811]: W0219 10:40:52.293099 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d303b5a_179e_407e_8dae_2eab291880d5.slice/crio-0967f8330972bec270ceaba64acf9e7d1494e0bb836062789973e3c3f8df0eda WatchSource:0}: Error finding container 0967f8330972bec270ceaba64acf9e7d1494e0bb836062789973e3c3f8df0eda: Status 404 returned error can't find the container with id 0967f8330972bec270ceaba64acf9e7d1494e0bb836062789973e3c3f8df0eda Feb 19 10:40:52 crc kubenswrapper[4811]: I0219 10:40:52.308577 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" event={"ID":"2d303b5a-179e-407e-8dae-2eab291880d5","Type":"ContainerStarted","Data":"0967f8330972bec270ceaba64acf9e7d1494e0bb836062789973e3c3f8df0eda"} Feb 19 10:40:52 crc kubenswrapper[4811]: I0219 10:40:52.539874 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjrh2"] Feb 19 10:40:52 crc kubenswrapper[4811]: I0219 10:40:52.540189 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jjrh2" podUID="fb5be783-336b-4e72-b544-003e4ec3e969" containerName="registry-server" containerID="cri-o://c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883" 
gracePeriod=2 Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.002958 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.138428 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-catalog-content\") pod \"fb5be783-336b-4e72-b544-003e4ec3e969\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.138495 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-utilities\") pod \"fb5be783-336b-4e72-b544-003e4ec3e969\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.138520 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-764kg\" (UniqueName: \"kubernetes.io/projected/fb5be783-336b-4e72-b544-003e4ec3e969-kube-api-access-764kg\") pod \"fb5be783-336b-4e72-b544-003e4ec3e969\" (UID: \"fb5be783-336b-4e72-b544-003e4ec3e969\") " Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.139506 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-utilities" (OuterVolumeSpecName: "utilities") pod "fb5be783-336b-4e72-b544-003e4ec3e969" (UID: "fb5be783-336b-4e72-b544-003e4ec3e969"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.140140 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.143803 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5be783-336b-4e72-b544-003e4ec3e969-kube-api-access-764kg" (OuterVolumeSpecName: "kube-api-access-764kg") pod "fb5be783-336b-4e72-b544-003e4ec3e969" (UID: "fb5be783-336b-4e72-b544-003e4ec3e969"). InnerVolumeSpecName "kube-api-access-764kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.190170 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb5be783-336b-4e72-b544-003e4ec3e969" (UID: "fb5be783-336b-4e72-b544-003e4ec3e969"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.244998 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb5be783-336b-4e72-b544-003e4ec3e969-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.245070 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-764kg\" (UniqueName: \"kubernetes.io/projected/fb5be783-336b-4e72-b544-003e4ec3e969-kube-api-access-764kg\") on node \"crc\" DevicePath \"\"" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.321931 4811 generic.go:334] "Generic (PLEG): container finished" podID="fb5be783-336b-4e72-b544-003e4ec3e969" containerID="c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883" exitCode=0 Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.321987 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjrh2" event={"ID":"fb5be783-336b-4e72-b544-003e4ec3e969","Type":"ContainerDied","Data":"c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883"} Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.322038 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jjrh2" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.322059 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjrh2" event={"ID":"fb5be783-336b-4e72-b544-003e4ec3e969","Type":"ContainerDied","Data":"3688d421c46b3c551e2e10732f40b7184ee0d50cc72c33d7756e80f111bcd01e"} Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.322085 4811 scope.go:117] "RemoveContainer" containerID="c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.356773 4811 scope.go:117] "RemoveContainer" containerID="a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.384280 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjrh2"] Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.386248 4811 scope.go:117] "RemoveContainer" containerID="9265337708bcc10c2fb01fdec02d88552de5e65dabc6671a02e59d2c9d6e2fb4" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.396589 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jjrh2"] Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.422492 4811 scope.go:117] "RemoveContainer" containerID="c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883" Feb 19 10:40:53 crc kubenswrapper[4811]: E0219 10:40:53.423246 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883\": container with ID starting with c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883 not found: ID does not exist" containerID="c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.423338 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883"} err="failed to get container status \"c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883\": rpc error: code = NotFound desc = could not find container \"c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883\": container with ID starting with c93a1b2b5354e526f11658811563bb6815c289aac1cf8a740133d1aa19787883 not found: ID does not exist" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.423381 4811 scope.go:117] "RemoveContainer" containerID="a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38" Feb 19 10:40:53 crc kubenswrapper[4811]: E0219 10:40:53.424059 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38\": container with ID starting with a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38 not found: ID does not exist" containerID="a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.424107 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38"} err="failed to get container status \"a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38\": rpc error: code = NotFound desc = could not find container \"a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38\": container with ID starting with a90722a4328b4506d9cb964544e45255b6fbcec53a99a89d3269f3c4f146bd38 not found: ID does not exist" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.424146 4811 scope.go:117] "RemoveContainer" containerID="9265337708bcc10c2fb01fdec02d88552de5e65dabc6671a02e59d2c9d6e2fb4" Feb 19 10:40:53 crc kubenswrapper[4811]: E0219 
10:40:53.424558 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9265337708bcc10c2fb01fdec02d88552de5e65dabc6671a02e59d2c9d6e2fb4\": container with ID starting with 9265337708bcc10c2fb01fdec02d88552de5e65dabc6671a02e59d2c9d6e2fb4 not found: ID does not exist" containerID="9265337708bcc10c2fb01fdec02d88552de5e65dabc6671a02e59d2c9d6e2fb4" Feb 19 10:40:53 crc kubenswrapper[4811]: I0219 10:40:53.424596 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9265337708bcc10c2fb01fdec02d88552de5e65dabc6671a02e59d2c9d6e2fb4"} err="failed to get container status \"9265337708bcc10c2fb01fdec02d88552de5e65dabc6671a02e59d2c9d6e2fb4\": rpc error: code = NotFound desc = could not find container \"9265337708bcc10c2fb01fdec02d88552de5e65dabc6671a02e59d2c9d6e2fb4\": container with ID starting with 9265337708bcc10c2fb01fdec02d88552de5e65dabc6671a02e59d2c9d6e2fb4 not found: ID does not exist" Feb 19 10:40:54 crc kubenswrapper[4811]: I0219 10:40:54.337029 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" event={"ID":"2d303b5a-179e-407e-8dae-2eab291880d5","Type":"ContainerStarted","Data":"18101c3e77a55e21f509191381fb11425fd79deb5c7dd7525b01cf5106085227"} Feb 19 10:40:54 crc kubenswrapper[4811]: I0219 10:40:54.598338 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb5be783-336b-4e72-b544-003e4ec3e969" path="/var/lib/kubelet/pods/fb5be783-336b-4e72-b544-003e4ec3e969/volumes" Feb 19 10:40:55 crc kubenswrapper[4811]: I0219 10:40:55.583846 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:40:55 crc kubenswrapper[4811]: E0219 10:40:55.584180 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:40:56 crc kubenswrapper[4811]: I0219 10:40:56.321054 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6vjdr" podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerName="registry-server" probeResult="failure" output=< Feb 19 10:40:56 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 19 10:40:56 crc kubenswrapper[4811]: > Feb 19 10:41:05 crc kubenswrapper[4811]: I0219 10:41:05.324063 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:41:05 crc kubenswrapper[4811]: I0219 10:41:05.357579 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" podStartSLOduration=13.448176298 podStartE2EDuration="14.357553267s" podCreationTimestamp="2026-02-19 10:40:51 +0000 UTC" firstStartedPulling="2026-02-19 10:40:52.296765856 +0000 UTC m=+1940.823230542" lastFinishedPulling="2026-02-19 10:40:53.206142825 +0000 UTC m=+1941.732607511" observedRunningTime="2026-02-19 10:40:54.360461948 +0000 UTC m=+1942.886926654" watchObservedRunningTime="2026-02-19 10:41:05.357553267 +0000 UTC m=+1953.884017953" Feb 19 10:41:05 crc kubenswrapper[4811]: I0219 10:41:05.386584 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:41:05 crc kubenswrapper[4811]: I0219 10:41:05.579229 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vjdr"] Feb 19 10:41:06 crc kubenswrapper[4811]: I0219 10:41:06.460090 4811 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-6vjdr" podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerName="registry-server" containerID="cri-o://f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a" gracePeriod=2 Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.436846 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.490986 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-catalog-content\") pod \"eb37647d-32f6-4f37-865d-e67077b17ba4\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.491346 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-utilities\") pod \"eb37647d-32f6-4f37-865d-e67077b17ba4\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.491467 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48p7v\" (UniqueName: \"kubernetes.io/projected/eb37647d-32f6-4f37-865d-e67077b17ba4-kube-api-access-48p7v\") pod \"eb37647d-32f6-4f37-865d-e67077b17ba4\" (UID: \"eb37647d-32f6-4f37-865d-e67077b17ba4\") " Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.494242 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-utilities" (OuterVolumeSpecName: "utilities") pod "eb37647d-32f6-4f37-865d-e67077b17ba4" (UID: "eb37647d-32f6-4f37-865d-e67077b17ba4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.499909 4811 generic.go:334] "Generic (PLEG): container finished" podID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerID="f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a" exitCode=0 Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.499986 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjdr" event={"ID":"eb37647d-32f6-4f37-865d-e67077b17ba4","Type":"ContainerDied","Data":"f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a"} Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.500021 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vjdr" event={"ID":"eb37647d-32f6-4f37-865d-e67077b17ba4","Type":"ContainerDied","Data":"841cd5df8140b3ca4861cfa2b96195db9be6b5a1cba4bb4339e5458bd6382cb3"} Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.500047 4811 scope.go:117] "RemoveContainer" containerID="f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.500065 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vjdr" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.500255 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.504538 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb37647d-32f6-4f37-865d-e67077b17ba4-kube-api-access-48p7v" (OuterVolumeSpecName: "kube-api-access-48p7v") pod "eb37647d-32f6-4f37-865d-e67077b17ba4" (UID: "eb37647d-32f6-4f37-865d-e67077b17ba4"). InnerVolumeSpecName "kube-api-access-48p7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.558269 4811 scope.go:117] "RemoveContainer" containerID="ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.588539 4811 scope.go:117] "RemoveContainer" containerID="32287af19a1972bbc67b5965c314f0cd3bd4b22af17fa76f324a2bfa364b7ad0" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.603373 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48p7v\" (UniqueName: \"kubernetes.io/projected/eb37647d-32f6-4f37-865d-e67077b17ba4-kube-api-access-48p7v\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.633105 4811 scope.go:117] "RemoveContainer" containerID="f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a" Feb 19 10:41:07 crc kubenswrapper[4811]: E0219 10:41:07.633767 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a\": container with ID starting with f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a not found: ID does not exist" containerID="f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.633819 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a"} err="failed to get container status \"f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a\": rpc error: code = NotFound desc = could not find container \"f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a\": container with ID starting with f1907f5e2369753b748154ef0c4238825fc92ae4819ad33e0a5bf9c0d73b583a not found: ID does not exist" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.633859 
4811 scope.go:117] "RemoveContainer" containerID="ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651" Feb 19 10:41:07 crc kubenswrapper[4811]: E0219 10:41:07.634298 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651\": container with ID starting with ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651 not found: ID does not exist" containerID="ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.634338 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651"} err="failed to get container status \"ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651\": rpc error: code = NotFound desc = could not find container \"ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651\": container with ID starting with ff24cca4a84ff4357a6837dfcb7505d4337a06fcddf0fe3f8738c18927df8651 not found: ID does not exist" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.634359 4811 scope.go:117] "RemoveContainer" containerID="32287af19a1972bbc67b5965c314f0cd3bd4b22af17fa76f324a2bfa364b7ad0" Feb 19 10:41:07 crc kubenswrapper[4811]: E0219 10:41:07.634718 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32287af19a1972bbc67b5965c314f0cd3bd4b22af17fa76f324a2bfa364b7ad0\": container with ID starting with 32287af19a1972bbc67b5965c314f0cd3bd4b22af17fa76f324a2bfa364b7ad0 not found: ID does not exist" containerID="32287af19a1972bbc67b5965c314f0cd3bd4b22af17fa76f324a2bfa364b7ad0" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.634755 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32287af19a1972bbc67b5965c314f0cd3bd4b22af17fa76f324a2bfa364b7ad0"} err="failed to get container status \"32287af19a1972bbc67b5965c314f0cd3bd4b22af17fa76f324a2bfa364b7ad0\": rpc error: code = NotFound desc = could not find container \"32287af19a1972bbc67b5965c314f0cd3bd4b22af17fa76f324a2bfa364b7ad0\": container with ID starting with 32287af19a1972bbc67b5965c314f0cd3bd4b22af17fa76f324a2bfa364b7ad0 not found: ID does not exist" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.636195 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb37647d-32f6-4f37-865d-e67077b17ba4" (UID: "eb37647d-32f6-4f37-865d-e67077b17ba4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.705872 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb37647d-32f6-4f37-865d-e67077b17ba4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.842603 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vjdr"] Feb 19 10:41:07 crc kubenswrapper[4811]: I0219 10:41:07.850959 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6vjdr"] Feb 19 10:41:08 crc kubenswrapper[4811]: I0219 10:41:08.594816 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" path="/var/lib/kubelet/pods/eb37647d-32f6-4f37-865d-e67077b17ba4/volumes" Feb 19 10:41:09 crc kubenswrapper[4811]: I0219 10:41:09.583941 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:41:09 crc kubenswrapper[4811]: E0219 
10:41:09.584829 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:41:24 crc kubenswrapper[4811]: I0219 10:41:24.584252 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:41:24 crc kubenswrapper[4811]: E0219 10:41:24.585431 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:41:32 crc kubenswrapper[4811]: I0219 10:41:32.755390 4811 generic.go:334] "Generic (PLEG): container finished" podID="2d303b5a-179e-407e-8dae-2eab291880d5" containerID="18101c3e77a55e21f509191381fb11425fd79deb5c7dd7525b01cf5106085227" exitCode=0 Feb 19 10:41:32 crc kubenswrapper[4811]: I0219 10:41:32.755507 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" event={"ID":"2d303b5a-179e-407e-8dae-2eab291880d5","Type":"ContainerDied","Data":"18101c3e77a55e21f509191381fb11425fd79deb5c7dd7525b01cf5106085227"} Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.302542 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.414886 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-nova-combined-ca-bundle\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.414992 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.415255 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-libvirt-combined-ca-bundle\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.415424 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ssh-key-openstack-edpm-ipam\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.415511 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-inventory\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc 
kubenswrapper[4811]: I0219 10:41:34.415570 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.415665 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.415736 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2h45\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-kube-api-access-p2h45\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.415846 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-bootstrap-combined-ca-bundle\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.415899 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-repo-setup-combined-ca-bundle\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.415945 4811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-telemetry-combined-ca-bundle\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.415982 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ovn-combined-ca-bundle\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.416150 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.416628 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-neutron-metadata-combined-ca-bundle\") pod \"2d303b5a-179e-407e-8dae-2eab291880d5\" (UID: \"2d303b5a-179e-407e-8dae-2eab291880d5\") " Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.424957 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.425185 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.426147 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.426944 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.427175 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). 
InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.427950 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.428697 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.431502 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.431520 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.436984 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-kube-api-access-p2h45" (OuterVolumeSpecName: "kube-api-access-p2h45") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "kube-api-access-p2h45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.437240 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.437399 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.459474 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-inventory" (OuterVolumeSpecName: "inventory") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.461662 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2d303b5a-179e-407e-8dae-2eab291880d5" (UID: "2d303b5a-179e-407e-8dae-2eab291880d5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520253 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520340 4811 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520361 4811 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520376 4811 reconciler_common.go:293] "Volume detached for 
volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520396 4811 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520410 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520423 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520458 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520478 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520492 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2h45\" (UniqueName: \"kubernetes.io/projected/2d303b5a-179e-407e-8dae-2eab291880d5-kube-api-access-p2h45\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc 
kubenswrapper[4811]: I0219 10:41:34.520504 4811 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520517 4811 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520530 4811 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.520543 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d303b5a-179e-407e-8dae-2eab291880d5-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.774799 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" event={"ID":"2d303b5a-179e-407e-8dae-2eab291880d5","Type":"ContainerDied","Data":"0967f8330972bec270ceaba64acf9e7d1494e0bb836062789973e3c3f8df0eda"} Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.775258 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0967f8330972bec270ceaba64acf9e7d1494e0bb836062789973e3c3f8df0eda" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.774859 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.978207 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn"] Feb 19 10:41:34 crc kubenswrapper[4811]: E0219 10:41:34.978609 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerName="registry-server" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.978623 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerName="registry-server" Feb 19 10:41:34 crc kubenswrapper[4811]: E0219 10:41:34.978640 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5be783-336b-4e72-b544-003e4ec3e969" containerName="registry-server" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.978646 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5be783-336b-4e72-b544-003e4ec3e969" containerName="registry-server" Feb 19 10:41:34 crc kubenswrapper[4811]: E0219 10:41:34.978659 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerName="extract-utilities" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.978666 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerName="extract-utilities" Feb 19 10:41:34 crc kubenswrapper[4811]: E0219 10:41:34.978681 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5be783-336b-4e72-b544-003e4ec3e969" containerName="extract-utilities" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.978687 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5be783-336b-4e72-b544-003e4ec3e969" containerName="extract-utilities" Feb 19 10:41:34 crc kubenswrapper[4811]: E0219 10:41:34.978695 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerName="extract-content" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.978701 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerName="extract-content" Feb 19 10:41:34 crc kubenswrapper[4811]: E0219 10:41:34.978715 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d303b5a-179e-407e-8dae-2eab291880d5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.978722 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d303b5a-179e-407e-8dae-2eab291880d5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 10:41:34 crc kubenswrapper[4811]: E0219 10:41:34.978745 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5be783-336b-4e72-b544-003e4ec3e969" containerName="extract-content" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.978752 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5be783-336b-4e72-b544-003e4ec3e969" containerName="extract-content" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.978910 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d303b5a-179e-407e-8dae-2eab291880d5" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.978931 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb37647d-32f6-4f37-865d-e67077b17ba4" containerName="registry-server" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.978940 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb5be783-336b-4e72-b544-003e4ec3e969" containerName="registry-server" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.979568 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.981936 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.982409 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.982770 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.982798 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.991479 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:41:34 crc kubenswrapper[4811]: I0219 10:41:34.993281 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn"] Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.030279 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.030346 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: 
\"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.030380 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtdcz\" (UniqueName: \"kubernetes.io/projected/f9579646-1f18-4b3b-af75-9c5af0107c79-kube-api-access-xtdcz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.030986 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f9579646-1f18-4b3b-af75-9c5af0107c79-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.031086 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.133031 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f9579646-1f18-4b3b-af75-9c5af0107c79-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.134053 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.134096 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.134125 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.134168 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtdcz\" (UniqueName: \"kubernetes.io/projected/f9579646-1f18-4b3b-af75-9c5af0107c79-kube-api-access-xtdcz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.134267 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f9579646-1f18-4b3b-af75-9c5af0107c79-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: 
\"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.139253 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.141002 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.144082 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.155289 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtdcz\" (UniqueName: \"kubernetes.io/projected/f9579646-1f18-4b3b-af75-9c5af0107c79-kube-api-access-xtdcz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-m9spn\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.297853 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:41:35 crc kubenswrapper[4811]: W0219 10:41:35.896182 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9579646_1f18_4b3b_af75_9c5af0107c79.slice/crio-2d15a8aee53525ce51507cef453699e5cc10e4efd2c5859caf19b30440455f2f WatchSource:0}: Error finding container 2d15a8aee53525ce51507cef453699e5cc10e4efd2c5859caf19b30440455f2f: Status 404 returned error can't find the container with id 2d15a8aee53525ce51507cef453699e5cc10e4efd2c5859caf19b30440455f2f Feb 19 10:41:35 crc kubenswrapper[4811]: I0219 10:41:35.897541 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn"] Feb 19 10:41:36 crc kubenswrapper[4811]: I0219 10:41:36.792664 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" event={"ID":"f9579646-1f18-4b3b-af75-9c5af0107c79","Type":"ContainerStarted","Data":"f01205ed246f199ceac2295652e5838c893f6d3fdc8bb9cff9f0ca56bfe3459c"} Feb 19 10:41:36 crc kubenswrapper[4811]: I0219 10:41:36.793005 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" event={"ID":"f9579646-1f18-4b3b-af75-9c5af0107c79","Type":"ContainerStarted","Data":"2d15a8aee53525ce51507cef453699e5cc10e4efd2c5859caf19b30440455f2f"} Feb 19 10:41:36 crc kubenswrapper[4811]: I0219 10:41:36.810809 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" podStartSLOduration=2.266050834 podStartE2EDuration="2.810788586s" podCreationTimestamp="2026-02-19 10:41:34 +0000 UTC" firstStartedPulling="2026-02-19 10:41:35.898159773 +0000 UTC m=+1984.424624459" lastFinishedPulling="2026-02-19 10:41:36.442897525 +0000 UTC m=+1984.969362211" observedRunningTime="2026-02-19 
10:41:36.807254736 +0000 UTC m=+1985.333719422" watchObservedRunningTime="2026-02-19 10:41:36.810788586 +0000 UTC m=+1985.337253262" Feb 19 10:41:37 crc kubenswrapper[4811]: I0219 10:41:37.583898 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:41:38 crc kubenswrapper[4811]: I0219 10:41:38.815998 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"d7201f4dbe99dc72ed85368aab06db50b1d5d39c74f4d3f0e181888c8cdf80c5"} Feb 19 10:42:37 crc kubenswrapper[4811]: I0219 10:42:37.380828 4811 generic.go:334] "Generic (PLEG): container finished" podID="f9579646-1f18-4b3b-af75-9c5af0107c79" containerID="f01205ed246f199ceac2295652e5838c893f6d3fdc8bb9cff9f0ca56bfe3459c" exitCode=0 Feb 19 10:42:37 crc kubenswrapper[4811]: I0219 10:42:37.380913 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" event={"ID":"f9579646-1f18-4b3b-af75-9c5af0107c79","Type":"ContainerDied","Data":"f01205ed246f199ceac2295652e5838c893f6d3fdc8bb9cff9f0ca56bfe3459c"} Feb 19 10:42:38 crc kubenswrapper[4811]: I0219 10:42:38.789286 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:42:38 crc kubenswrapper[4811]: I0219 10:42:38.976647 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtdcz\" (UniqueName: \"kubernetes.io/projected/f9579646-1f18-4b3b-af75-9c5af0107c79-kube-api-access-xtdcz\") pod \"f9579646-1f18-4b3b-af75-9c5af0107c79\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " Feb 19 10:42:38 crc kubenswrapper[4811]: I0219 10:42:38.976734 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ovn-combined-ca-bundle\") pod \"f9579646-1f18-4b3b-af75-9c5af0107c79\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " Feb 19 10:42:38 crc kubenswrapper[4811]: I0219 10:42:38.976768 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f9579646-1f18-4b3b-af75-9c5af0107c79-ovncontroller-config-0\") pod \"f9579646-1f18-4b3b-af75-9c5af0107c79\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " Feb 19 10:42:38 crc kubenswrapper[4811]: I0219 10:42:38.976803 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-inventory\") pod \"f9579646-1f18-4b3b-af75-9c5af0107c79\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " Feb 19 10:42:38 crc kubenswrapper[4811]: I0219 10:42:38.977049 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ssh-key-openstack-edpm-ipam\") pod \"f9579646-1f18-4b3b-af75-9c5af0107c79\" (UID: \"f9579646-1f18-4b3b-af75-9c5af0107c79\") " Feb 19 10:42:38 crc kubenswrapper[4811]: I0219 10:42:38.984353 4811 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9579646-1f18-4b3b-af75-9c5af0107c79-kube-api-access-xtdcz" (OuterVolumeSpecName: "kube-api-access-xtdcz") pod "f9579646-1f18-4b3b-af75-9c5af0107c79" (UID: "f9579646-1f18-4b3b-af75-9c5af0107c79"). InnerVolumeSpecName "kube-api-access-xtdcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:42:38 crc kubenswrapper[4811]: I0219 10:42:38.985637 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f9579646-1f18-4b3b-af75-9c5af0107c79" (UID: "f9579646-1f18-4b3b-af75-9c5af0107c79"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.008001 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f9579646-1f18-4b3b-af75-9c5af0107c79" (UID: "f9579646-1f18-4b3b-af75-9c5af0107c79"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.008210 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-inventory" (OuterVolumeSpecName: "inventory") pod "f9579646-1f18-4b3b-af75-9c5af0107c79" (UID: "f9579646-1f18-4b3b-af75-9c5af0107c79"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.012176 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9579646-1f18-4b3b-af75-9c5af0107c79-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "f9579646-1f18-4b3b-af75-9c5af0107c79" (UID: "f9579646-1f18-4b3b-af75-9c5af0107c79"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.080560 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtdcz\" (UniqueName: \"kubernetes.io/projected/f9579646-1f18-4b3b-af75-9c5af0107c79-kube-api-access-xtdcz\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.080664 4811 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.080677 4811 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f9579646-1f18-4b3b-af75-9c5af0107c79-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.080688 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.080697 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9579646-1f18-4b3b-af75-9c5af0107c79-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.404592 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" event={"ID":"f9579646-1f18-4b3b-af75-9c5af0107c79","Type":"ContainerDied","Data":"2d15a8aee53525ce51507cef453699e5cc10e4efd2c5859caf19b30440455f2f"} Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.404667 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d15a8aee53525ce51507cef453699e5cc10e4efd2c5859caf19b30440455f2f" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.404677 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-m9spn" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.517818 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz"] Feb 19 10:42:39 crc kubenswrapper[4811]: E0219 10:42:39.518536 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9579646-1f18-4b3b-af75-9c5af0107c79" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.518557 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9579646-1f18-4b3b-af75-9c5af0107c79" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.518817 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9579646-1f18-4b3b-af75-9c5af0107c79" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.519929 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.523172 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.523486 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.523631 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.524399 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.525080 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.525259 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.532182 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz"] Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.593802 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.593908 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg2br\" (UniqueName: \"kubernetes.io/projected/a207452e-4fab-4a38-be85-31bc84e8d6fc-kube-api-access-kg2br\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.593930 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.593947 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.593976 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.594001 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.695964 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.696127 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg2br\" (UniqueName: \"kubernetes.io/projected/a207452e-4fab-4a38-be85-31bc84e8d6fc-kube-api-access-kg2br\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.696167 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.696208 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.696242 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.696267 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.702600 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.702675 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-ovn-metadata-agent-neutron-config-0\") 
pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.705214 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.711810 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.711899 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.716152 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg2br\" (UniqueName: \"kubernetes.io/projected/a207452e-4fab-4a38-be85-31bc84e8d6fc-kube-api-access-kg2br\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:39 crc kubenswrapper[4811]: I0219 10:42:39.882098 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:42:40 crc kubenswrapper[4811]: I0219 10:42:40.442107 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz"] Feb 19 10:42:41 crc kubenswrapper[4811]: I0219 10:42:41.432364 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" event={"ID":"a207452e-4fab-4a38-be85-31bc84e8d6fc","Type":"ContainerStarted","Data":"ccdb5cf2ecf0f6fdedcd8be2872f39b5dbefcddeef374b60e780bdd0a56ad9ca"} Feb 19 10:42:41 crc kubenswrapper[4811]: I0219 10:42:41.432936 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" event={"ID":"a207452e-4fab-4a38-be85-31bc84e8d6fc","Type":"ContainerStarted","Data":"dafe4ed92e5652702f18a542527d01423afb681a9d65a92bef8c5e716a1e9b1f"} Feb 19 10:42:41 crc kubenswrapper[4811]: I0219 10:42:41.453808 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" podStartSLOduration=1.9534500000000001 podStartE2EDuration="2.453788212s" podCreationTimestamp="2026-02-19 10:42:39 +0000 UTC" firstStartedPulling="2026-02-19 10:42:40.456422223 +0000 UTC m=+2048.982886909" lastFinishedPulling="2026-02-19 10:42:40.956760435 +0000 UTC m=+2049.483225121" observedRunningTime="2026-02-19 10:42:41.453366032 +0000 UTC m=+2049.979830728" watchObservedRunningTime="2026-02-19 10:42:41.453788212 +0000 UTC m=+2049.980252898" Feb 19 10:43:27 crc kubenswrapper[4811]: I0219 10:43:27.955900 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="a207452e-4fab-4a38-be85-31bc84e8d6fc" containerID="ccdb5cf2ecf0f6fdedcd8be2872f39b5dbefcddeef374b60e780bdd0a56ad9ca" exitCode=0 Feb 19 10:43:27 crc kubenswrapper[4811]: I0219 10:43:27.956005 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" event={"ID":"a207452e-4fab-4a38-be85-31bc84e8d6fc","Type":"ContainerDied","Data":"ccdb5cf2ecf0f6fdedcd8be2872f39b5dbefcddeef374b60e780bdd0a56ad9ca"} Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.457097 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.498717 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg2br\" (UniqueName: \"kubernetes.io/projected/a207452e-4fab-4a38-be85-31bc84e8d6fc-kube-api-access-kg2br\") pod \"a207452e-4fab-4a38-be85-31bc84e8d6fc\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.498838 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-metadata-combined-ca-bundle\") pod \"a207452e-4fab-4a38-be85-31bc84e8d6fc\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.498899 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-ssh-key-openstack-edpm-ipam\") pod \"a207452e-4fab-4a38-be85-31bc84e8d6fc\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.498969 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-nova-metadata-neutron-config-0\") pod \"a207452e-4fab-4a38-be85-31bc84e8d6fc\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.499371 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-inventory\") pod \"a207452e-4fab-4a38-be85-31bc84e8d6fc\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.499487 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"a207452e-4fab-4a38-be85-31bc84e8d6fc\" (UID: \"a207452e-4fab-4a38-be85-31bc84e8d6fc\") " Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.508563 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a207452e-4fab-4a38-be85-31bc84e8d6fc" (UID: "a207452e-4fab-4a38-be85-31bc84e8d6fc"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.508762 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a207452e-4fab-4a38-be85-31bc84e8d6fc-kube-api-access-kg2br" (OuterVolumeSpecName: "kube-api-access-kg2br") pod "a207452e-4fab-4a38-be85-31bc84e8d6fc" (UID: "a207452e-4fab-4a38-be85-31bc84e8d6fc"). InnerVolumeSpecName "kube-api-access-kg2br". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.534863 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a207452e-4fab-4a38-be85-31bc84e8d6fc" (UID: "a207452e-4fab-4a38-be85-31bc84e8d6fc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.538333 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "a207452e-4fab-4a38-be85-31bc84e8d6fc" (UID: "a207452e-4fab-4a38-be85-31bc84e8d6fc"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.550254 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "a207452e-4fab-4a38-be85-31bc84e8d6fc" (UID: "a207452e-4fab-4a38-be85-31bc84e8d6fc"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.559050 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-inventory" (OuterVolumeSpecName: "inventory") pod "a207452e-4fab-4a38-be85-31bc84e8d6fc" (UID: "a207452e-4fab-4a38-be85-31bc84e8d6fc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.605361 4811 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.605413 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.605429 4811 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.605450 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg2br\" (UniqueName: \"kubernetes.io/projected/a207452e-4fab-4a38-be85-31bc84e8d6fc-kube-api-access-kg2br\") on node \"crc\" DevicePath \"\"" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.605480 4811 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.605494 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a207452e-4fab-4a38-be85-31bc84e8d6fc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.985098 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" event={"ID":"a207452e-4fab-4a38-be85-31bc84e8d6fc","Type":"ContainerDied","Data":"dafe4ed92e5652702f18a542527d01423afb681a9d65a92bef8c5e716a1e9b1f"} Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.985171 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dafe4ed92e5652702f18a542527d01423afb681a9d65a92bef8c5e716a1e9b1f" Feb 19 10:43:29 crc kubenswrapper[4811]: I0219 10:43:29.985203 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.116001 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2"] Feb 19 10:43:30 crc kubenswrapper[4811]: E0219 10:43:30.116642 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a207452e-4fab-4a38-be85-31bc84e8d6fc" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.116678 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="a207452e-4fab-4a38-be85-31bc84e8d6fc" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.116986 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="a207452e-4fab-4a38-be85-31bc84e8d6fc" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.118080 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.122297 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.124005 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.124331 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.124509 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.124420 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.139033 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2"] Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.223044 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.223256 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: 
\"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.223365 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.223409 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxnl\" (UniqueName: \"kubernetes.io/projected/177810e6-4482-4c82-ac9a-721e76c63579-kube-api-access-cjxnl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.223489 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.325628 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.325796 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.325873 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.325906 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjxnl\" (UniqueName: \"kubernetes.io/projected/177810e6-4482-4c82-ac9a-721e76c63579-kube-api-access-cjxnl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.325942 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.334525 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.336330 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.346429 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.364414 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.374721 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjxnl\" (UniqueName: \"kubernetes.io/projected/177810e6-4482-4c82-ac9a-721e76c63579-kube-api-access-cjxnl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-flfd2\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:30 crc kubenswrapper[4811]: I0219 10:43:30.444999 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:43:31 crc kubenswrapper[4811]: I0219 10:43:31.148719 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2"] Feb 19 10:43:32 crc kubenswrapper[4811]: I0219 10:43:32.043128 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" event={"ID":"177810e6-4482-4c82-ac9a-721e76c63579","Type":"ContainerStarted","Data":"6ca044e4729c2efab085dc27b9a83780ec020a74201dd3e93fa4a9c458120648"} Feb 19 10:43:33 crc kubenswrapper[4811]: I0219 10:43:33.059429 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" event={"ID":"177810e6-4482-4c82-ac9a-721e76c63579","Type":"ContainerStarted","Data":"bccbb5a4cc57770fa9050adaf671fcab3a2690351fb27b36e853e78c7b785d99"} Feb 19 10:43:33 crc kubenswrapper[4811]: I0219 10:43:33.082477 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" podStartSLOduration=2.334122387 podStartE2EDuration="3.082429608s" podCreationTimestamp="2026-02-19 10:43:30 +0000 UTC" firstStartedPulling="2026-02-19 10:43:31.164045183 +0000 UTC m=+2099.690509869" lastFinishedPulling="2026-02-19 10:43:31.912352404 +0000 UTC m=+2100.438817090" observedRunningTime="2026-02-19 10:43:33.079029002 +0000 UTC m=+2101.605493688" watchObservedRunningTime="2026-02-19 10:43:33.082429608 +0000 UTC m=+2101.608894284" Feb 19 10:43:56 crc kubenswrapper[4811]: I0219 10:43:56.787928 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:43:56 crc kubenswrapper[4811]: I0219 
10:43:56.788973 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:44:26 crc kubenswrapper[4811]: I0219 10:44:26.789370 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:44:26 crc kubenswrapper[4811]: I0219 10:44:26.790326 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:44:56 crc kubenswrapper[4811]: I0219 10:44:56.788763 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:44:56 crc kubenswrapper[4811]: I0219 10:44:56.789736 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:44:56 crc kubenswrapper[4811]: I0219 10:44:56.789822 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:44:56 crc kubenswrapper[4811]: I0219 10:44:56.791085 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7201f4dbe99dc72ed85368aab06db50b1d5d39c74f4d3f0e181888c8cdf80c5"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:44:56 crc kubenswrapper[4811]: I0219 10:44:56.791174 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://d7201f4dbe99dc72ed85368aab06db50b1d5d39c74f4d3f0e181888c8cdf80c5" gracePeriod=600 Feb 19 10:44:57 crc kubenswrapper[4811]: I0219 10:44:57.002343 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="d7201f4dbe99dc72ed85368aab06db50b1d5d39c74f4d3f0e181888c8cdf80c5" exitCode=0 Feb 19 10:44:57 crc kubenswrapper[4811]: I0219 10:44:57.002467 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"d7201f4dbe99dc72ed85368aab06db50b1d5d39c74f4d3f0e181888c8cdf80c5"} Feb 19 10:44:57 crc kubenswrapper[4811]: I0219 10:44:57.002890 4811 scope.go:117] "RemoveContainer" containerID="3963827d2c3359bad8b876f1ddce43e835b48fcca50fea9ad20004f5e029bba9" Feb 19 10:44:58 crc kubenswrapper[4811]: I0219 10:44:58.017918 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" 
event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d"} Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.161761 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl"] Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.166104 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.169034 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.172260 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.186475 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl"] Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.302084 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94kmt\" (UniqueName: \"kubernetes.io/projected/7892c5b3-42eb-4249-be50-0f9bbcfe7090-kube-api-access-94kmt\") pod \"collect-profiles-29524965-82pxl\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.302329 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7892c5b3-42eb-4249-be50-0f9bbcfe7090-config-volume\") pod \"collect-profiles-29524965-82pxl\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.302370 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7892c5b3-42eb-4249-be50-0f9bbcfe7090-secret-volume\") pod \"collect-profiles-29524965-82pxl\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.405636 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7892c5b3-42eb-4249-be50-0f9bbcfe7090-secret-volume\") pod \"collect-profiles-29524965-82pxl\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.405913 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94kmt\" (UniqueName: \"kubernetes.io/projected/7892c5b3-42eb-4249-be50-0f9bbcfe7090-kube-api-access-94kmt\") pod \"collect-profiles-29524965-82pxl\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.406001 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7892c5b3-42eb-4249-be50-0f9bbcfe7090-config-volume\") pod \"collect-profiles-29524965-82pxl\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.407269 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7892c5b3-42eb-4249-be50-0f9bbcfe7090-config-volume\") pod \"collect-profiles-29524965-82pxl\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.426964 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7892c5b3-42eb-4249-be50-0f9bbcfe7090-secret-volume\") pod \"collect-profiles-29524965-82pxl\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.427788 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94kmt\" (UniqueName: \"kubernetes.io/projected/7892c5b3-42eb-4249-be50-0f9bbcfe7090-kube-api-access-94kmt\") pod \"collect-profiles-29524965-82pxl\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:00 crc kubenswrapper[4811]: I0219 10:45:00.518407 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:01 crc kubenswrapper[4811]: I0219 10:45:01.055092 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl"] Feb 19 10:45:01 crc kubenswrapper[4811]: W0219 10:45:01.056977 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7892c5b3_42eb_4249_be50_0f9bbcfe7090.slice/crio-907072e1f2a063b12bc6ac224a999b6e2de902f8662f2506a8d4d976fd6775ba WatchSource:0}: Error finding container 907072e1f2a063b12bc6ac224a999b6e2de902f8662f2506a8d4d976fd6775ba: Status 404 returned error can't find the container with id 907072e1f2a063b12bc6ac224a999b6e2de902f8662f2506a8d4d976fd6775ba Feb 19 10:45:02 crc kubenswrapper[4811]: I0219 10:45:02.063812 4811 generic.go:334] "Generic (PLEG): container finished" podID="7892c5b3-42eb-4249-be50-0f9bbcfe7090" containerID="37fc067143ab2d0b18850643021e336c56e5b3afe2cc640ba630132a33ae6ded" exitCode=0 Feb 19 10:45:02 crc kubenswrapper[4811]: I0219 10:45:02.063867 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" event={"ID":"7892c5b3-42eb-4249-be50-0f9bbcfe7090","Type":"ContainerDied","Data":"37fc067143ab2d0b18850643021e336c56e5b3afe2cc640ba630132a33ae6ded"} Feb 19 10:45:02 crc kubenswrapper[4811]: I0219 10:45:02.064678 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" event={"ID":"7892c5b3-42eb-4249-be50-0f9bbcfe7090","Type":"ContainerStarted","Data":"907072e1f2a063b12bc6ac224a999b6e2de902f8662f2506a8d4d976fd6775ba"} Feb 19 10:45:03 crc kubenswrapper[4811]: I0219 10:45:03.509385 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:03 crc kubenswrapper[4811]: I0219 10:45:03.690191 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94kmt\" (UniqueName: \"kubernetes.io/projected/7892c5b3-42eb-4249-be50-0f9bbcfe7090-kube-api-access-94kmt\") pod \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " Feb 19 10:45:03 crc kubenswrapper[4811]: I0219 10:45:03.690318 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7892c5b3-42eb-4249-be50-0f9bbcfe7090-secret-volume\") pod \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " Feb 19 10:45:03 crc kubenswrapper[4811]: I0219 10:45:03.690570 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7892c5b3-42eb-4249-be50-0f9bbcfe7090-config-volume\") pod \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\" (UID: \"7892c5b3-42eb-4249-be50-0f9bbcfe7090\") " Feb 19 10:45:03 crc kubenswrapper[4811]: I0219 10:45:03.692796 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7892c5b3-42eb-4249-be50-0f9bbcfe7090-config-volume" (OuterVolumeSpecName: "config-volume") pod "7892c5b3-42eb-4249-be50-0f9bbcfe7090" (UID: "7892c5b3-42eb-4249-be50-0f9bbcfe7090"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:45:03 crc kubenswrapper[4811]: I0219 10:45:03.700544 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7892c5b3-42eb-4249-be50-0f9bbcfe7090-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7892c5b3-42eb-4249-be50-0f9bbcfe7090" (UID: "7892c5b3-42eb-4249-be50-0f9bbcfe7090"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:45:03 crc kubenswrapper[4811]: I0219 10:45:03.700558 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7892c5b3-42eb-4249-be50-0f9bbcfe7090-kube-api-access-94kmt" (OuterVolumeSpecName: "kube-api-access-94kmt") pod "7892c5b3-42eb-4249-be50-0f9bbcfe7090" (UID: "7892c5b3-42eb-4249-be50-0f9bbcfe7090"). InnerVolumeSpecName "kube-api-access-94kmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:45:03 crc kubenswrapper[4811]: I0219 10:45:03.794008 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94kmt\" (UniqueName: \"kubernetes.io/projected/7892c5b3-42eb-4249-be50-0f9bbcfe7090-kube-api-access-94kmt\") on node \"crc\" DevicePath \"\"" Feb 19 10:45:03 crc kubenswrapper[4811]: I0219 10:45:03.794069 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7892c5b3-42eb-4249-be50-0f9bbcfe7090-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:45:03 crc kubenswrapper[4811]: I0219 10:45:03.794082 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7892c5b3-42eb-4249-be50-0f9bbcfe7090-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 10:45:04 crc kubenswrapper[4811]: I0219 10:45:04.094670 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" event={"ID":"7892c5b3-42eb-4249-be50-0f9bbcfe7090","Type":"ContainerDied","Data":"907072e1f2a063b12bc6ac224a999b6e2de902f8662f2506a8d4d976fd6775ba"} Feb 19 10:45:04 crc kubenswrapper[4811]: I0219 10:45:04.095013 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="907072e1f2a063b12bc6ac224a999b6e2de902f8662f2506a8d4d976fd6775ba" Feb 19 10:45:04 crc kubenswrapper[4811]: I0219 10:45:04.094792 4811 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524965-82pxl" Feb 19 10:45:04 crc kubenswrapper[4811]: I0219 10:45:04.606116 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq"] Feb 19 10:45:04 crc kubenswrapper[4811]: I0219 10:45:04.613979 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524920-kwjxq"] Feb 19 10:45:06 crc kubenswrapper[4811]: I0219 10:45:06.595660 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fefb44b-8bb5-44f8-be2b-4f0ada1e3899" path="/var/lib/kubelet/pods/3fefb44b-8bb5-44f8-be2b-4f0ada1e3899/volumes" Feb 19 10:45:50 crc kubenswrapper[4811]: I0219 10:45:50.359674 4811 scope.go:117] "RemoveContainer" containerID="771306bee2591b778b7eea51a42049bfc901933dcfb96abf55a31c0449ee4661" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.338604 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5s26f"] Feb 19 10:46:11 crc kubenswrapper[4811]: E0219 10:46:11.340885 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7892c5b3-42eb-4249-be50-0f9bbcfe7090" containerName="collect-profiles" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.340977 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="7892c5b3-42eb-4249-be50-0f9bbcfe7090" containerName="collect-profiles" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.341250 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="7892c5b3-42eb-4249-be50-0f9bbcfe7090" containerName="collect-profiles" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.343041 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.353701 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5s26f"] Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.512530 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4swv\" (UniqueName: \"kubernetes.io/projected/d427c9eb-d93d-414b-bae8-456ff14448e7-kube-api-access-c4swv\") pod \"certified-operators-5s26f\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.512625 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-catalog-content\") pod \"certified-operators-5s26f\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.512711 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-utilities\") pod \"certified-operators-5s26f\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.615077 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-utilities\") pod \"certified-operators-5s26f\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.615198 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c4swv\" (UniqueName: \"kubernetes.io/projected/d427c9eb-d93d-414b-bae8-456ff14448e7-kube-api-access-c4swv\") pod \"certified-operators-5s26f\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.615247 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-catalog-content\") pod \"certified-operators-5s26f\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.615771 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-catalog-content\") pod \"certified-operators-5s26f\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.615794 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-utilities\") pod \"certified-operators-5s26f\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.640924 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4swv\" (UniqueName: \"kubernetes.io/projected/d427c9eb-d93d-414b-bae8-456ff14448e7-kube-api-access-c4swv\") pod \"certified-operators-5s26f\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:11 crc kubenswrapper[4811]: I0219 10:46:11.712846 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:12 crc kubenswrapper[4811]: I0219 10:46:12.309951 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5s26f"] Feb 19 10:46:12 crc kubenswrapper[4811]: I0219 10:46:12.817634 4811 generic.go:334] "Generic (PLEG): container finished" podID="d427c9eb-d93d-414b-bae8-456ff14448e7" containerID="0097ff9893421996b4a742e480643e42419dd7053f0cc886d6c9fdca72918438" exitCode=0 Feb 19 10:46:12 crc kubenswrapper[4811]: I0219 10:46:12.817708 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s26f" event={"ID":"d427c9eb-d93d-414b-bae8-456ff14448e7","Type":"ContainerDied","Data":"0097ff9893421996b4a742e480643e42419dd7053f0cc886d6c9fdca72918438"} Feb 19 10:46:12 crc kubenswrapper[4811]: I0219 10:46:12.818033 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s26f" event={"ID":"d427c9eb-d93d-414b-bae8-456ff14448e7","Type":"ContainerStarted","Data":"3cf60ddcc56089583fcf049539e0297355c26cff451d1a43db82ca51446844af"} Feb 19 10:46:12 crc kubenswrapper[4811]: I0219 10:46:12.821824 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:46:14 crc kubenswrapper[4811]: I0219 10:46:14.836820 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s26f" event={"ID":"d427c9eb-d93d-414b-bae8-456ff14448e7","Type":"ContainerStarted","Data":"790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a"} Feb 19 10:46:15 crc kubenswrapper[4811]: I0219 10:46:15.846893 4811 generic.go:334] "Generic (PLEG): container finished" podID="d427c9eb-d93d-414b-bae8-456ff14448e7" containerID="790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a" exitCode=0 Feb 19 10:46:15 crc kubenswrapper[4811]: I0219 10:46:15.846996 4811 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-5s26f" event={"ID":"d427c9eb-d93d-414b-bae8-456ff14448e7","Type":"ContainerDied","Data":"790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a"} Feb 19 10:46:16 crc kubenswrapper[4811]: I0219 10:46:16.859611 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s26f" event={"ID":"d427c9eb-d93d-414b-bae8-456ff14448e7","Type":"ContainerStarted","Data":"015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a"} Feb 19 10:46:16 crc kubenswrapper[4811]: I0219 10:46:16.895211 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5s26f" podStartSLOduration=2.450880026 podStartE2EDuration="5.895182388s" podCreationTimestamp="2026-02-19 10:46:11 +0000 UTC" firstStartedPulling="2026-02-19 10:46:12.821280195 +0000 UTC m=+2261.347744881" lastFinishedPulling="2026-02-19 10:46:16.265582557 +0000 UTC m=+2264.792047243" observedRunningTime="2026-02-19 10:46:16.883343007 +0000 UTC m=+2265.409807693" watchObservedRunningTime="2026-02-19 10:46:16.895182388 +0000 UTC m=+2265.421647074" Feb 19 10:46:21 crc kubenswrapper[4811]: I0219 10:46:21.713980 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:21 crc kubenswrapper[4811]: I0219 10:46:21.714362 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:21 crc kubenswrapper[4811]: I0219 10:46:21.767194 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:21 crc kubenswrapper[4811]: I0219 10:46:21.966768 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:22 crc kubenswrapper[4811]: I0219 
10:46:22.032339 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5s26f"] Feb 19 10:46:23 crc kubenswrapper[4811]: I0219 10:46:23.932902 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5s26f" podUID="d427c9eb-d93d-414b-bae8-456ff14448e7" containerName="registry-server" containerID="cri-o://015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a" gracePeriod=2 Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.522063 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.632985 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-catalog-content\") pod \"d427c9eb-d93d-414b-bae8-456ff14448e7\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.633686 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-utilities\") pod \"d427c9eb-d93d-414b-bae8-456ff14448e7\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.633734 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4swv\" (UniqueName: \"kubernetes.io/projected/d427c9eb-d93d-414b-bae8-456ff14448e7-kube-api-access-c4swv\") pod \"d427c9eb-d93d-414b-bae8-456ff14448e7\" (UID: \"d427c9eb-d93d-414b-bae8-456ff14448e7\") " Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.636434 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-utilities" (OuterVolumeSpecName: 
"utilities") pod "d427c9eb-d93d-414b-bae8-456ff14448e7" (UID: "d427c9eb-d93d-414b-bae8-456ff14448e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.642909 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d427c9eb-d93d-414b-bae8-456ff14448e7-kube-api-access-c4swv" (OuterVolumeSpecName: "kube-api-access-c4swv") pod "d427c9eb-d93d-414b-bae8-456ff14448e7" (UID: "d427c9eb-d93d-414b-bae8-456ff14448e7"). InnerVolumeSpecName "kube-api-access-c4swv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.736690 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.736736 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4swv\" (UniqueName: \"kubernetes.io/projected/d427c9eb-d93d-414b-bae8-456ff14448e7-kube-api-access-c4swv\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.945167 4811 generic.go:334] "Generic (PLEG): container finished" podID="d427c9eb-d93d-414b-bae8-456ff14448e7" containerID="015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a" exitCode=0 Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.945232 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s26f" event={"ID":"d427c9eb-d93d-414b-bae8-456ff14448e7","Type":"ContainerDied","Data":"015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a"} Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.945264 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5s26f" Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.945291 4811 scope.go:117] "RemoveContainer" containerID="015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a" Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.945274 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5s26f" event={"ID":"d427c9eb-d93d-414b-bae8-456ff14448e7","Type":"ContainerDied","Data":"3cf60ddcc56089583fcf049539e0297355c26cff451d1a43db82ca51446844af"} Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.968641 4811 scope.go:117] "RemoveContainer" containerID="790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a" Feb 19 10:46:24 crc kubenswrapper[4811]: I0219 10:46:24.991199 4811 scope.go:117] "RemoveContainer" containerID="0097ff9893421996b4a742e480643e42419dd7053f0cc886d6c9fdca72918438" Feb 19 10:46:25 crc kubenswrapper[4811]: I0219 10:46:25.051929 4811 scope.go:117] "RemoveContainer" containerID="015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a" Feb 19 10:46:25 crc kubenswrapper[4811]: E0219 10:46:25.052569 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a\": container with ID starting with 015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a not found: ID does not exist" containerID="015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a" Feb 19 10:46:25 crc kubenswrapper[4811]: I0219 10:46:25.052612 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a"} err="failed to get container status \"015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a\": rpc error: code = NotFound desc = could not find container 
\"015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a\": container with ID starting with 015f9a5cc7213a2dc0465318beaaa71cf4e57ce5b67fcc50b5f4cebadb0ae04a not found: ID does not exist" Feb 19 10:46:25 crc kubenswrapper[4811]: I0219 10:46:25.052641 4811 scope.go:117] "RemoveContainer" containerID="790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a" Feb 19 10:46:25 crc kubenswrapper[4811]: E0219 10:46:25.052964 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a\": container with ID starting with 790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a not found: ID does not exist" containerID="790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a" Feb 19 10:46:25 crc kubenswrapper[4811]: I0219 10:46:25.052995 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a"} err="failed to get container status \"790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a\": rpc error: code = NotFound desc = could not find container \"790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a\": container with ID starting with 790748e4c2f9eee9e68701c9c72e42703aca0c5de705187d850808cf0cc0dd0a not found: ID does not exist" Feb 19 10:46:25 crc kubenswrapper[4811]: I0219 10:46:25.053016 4811 scope.go:117] "RemoveContainer" containerID="0097ff9893421996b4a742e480643e42419dd7053f0cc886d6c9fdca72918438" Feb 19 10:46:25 crc kubenswrapper[4811]: E0219 10:46:25.053369 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0097ff9893421996b4a742e480643e42419dd7053f0cc886d6c9fdca72918438\": container with ID starting with 0097ff9893421996b4a742e480643e42419dd7053f0cc886d6c9fdca72918438 not found: ID does not exist" 
containerID="0097ff9893421996b4a742e480643e42419dd7053f0cc886d6c9fdca72918438" Feb 19 10:46:25 crc kubenswrapper[4811]: I0219 10:46:25.053394 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0097ff9893421996b4a742e480643e42419dd7053f0cc886d6c9fdca72918438"} err="failed to get container status \"0097ff9893421996b4a742e480643e42419dd7053f0cc886d6c9fdca72918438\": rpc error: code = NotFound desc = could not find container \"0097ff9893421996b4a742e480643e42419dd7053f0cc886d6c9fdca72918438\": container with ID starting with 0097ff9893421996b4a742e480643e42419dd7053f0cc886d6c9fdca72918438 not found: ID does not exist" Feb 19 10:46:25 crc kubenswrapper[4811]: I0219 10:46:25.165540 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d427c9eb-d93d-414b-bae8-456ff14448e7" (UID: "d427c9eb-d93d-414b-bae8-456ff14448e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:46:25 crc kubenswrapper[4811]: I0219 10:46:25.248730 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d427c9eb-d93d-414b-bae8-456ff14448e7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:46:25 crc kubenswrapper[4811]: I0219 10:46:25.286175 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5s26f"] Feb 19 10:46:25 crc kubenswrapper[4811]: I0219 10:46:25.298865 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5s26f"] Feb 19 10:46:26 crc kubenswrapper[4811]: I0219 10:46:26.597971 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d427c9eb-d93d-414b-bae8-456ff14448e7" path="/var/lib/kubelet/pods/d427c9eb-d93d-414b-bae8-456ff14448e7/volumes" Feb 19 10:46:47 crc kubenswrapper[4811]: E0219 10:46:47.830495 4811 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.248s" Feb 19 10:47:08 crc kubenswrapper[4811]: I0219 10:47:08.093850 4811 generic.go:334] "Generic (PLEG): container finished" podID="177810e6-4482-4c82-ac9a-721e76c63579" containerID="bccbb5a4cc57770fa9050adaf671fcab3a2690351fb27b36e853e78c7b785d99" exitCode=0 Feb 19 10:47:08 crc kubenswrapper[4811]: I0219 10:47:08.094605 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" event={"ID":"177810e6-4482-4c82-ac9a-721e76c63579","Type":"ContainerDied","Data":"bccbb5a4cc57770fa9050adaf671fcab3a2690351fb27b36e853e78c7b785d99"} Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.588224 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.764396 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjxnl\" (UniqueName: \"kubernetes.io/projected/177810e6-4482-4c82-ac9a-721e76c63579-kube-api-access-cjxnl\") pod \"177810e6-4482-4c82-ac9a-721e76c63579\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.764684 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-secret-0\") pod \"177810e6-4482-4c82-ac9a-721e76c63579\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.764829 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-ssh-key-openstack-edpm-ipam\") pod \"177810e6-4482-4c82-ac9a-721e76c63579\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.764884 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-inventory\") pod \"177810e6-4482-4c82-ac9a-721e76c63579\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.765065 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-combined-ca-bundle\") pod \"177810e6-4482-4c82-ac9a-721e76c63579\" (UID: \"177810e6-4482-4c82-ac9a-721e76c63579\") " Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.773193 4811 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "177810e6-4482-4c82-ac9a-721e76c63579" (UID: "177810e6-4482-4c82-ac9a-721e76c63579"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.780907 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177810e6-4482-4c82-ac9a-721e76c63579-kube-api-access-cjxnl" (OuterVolumeSpecName: "kube-api-access-cjxnl") pod "177810e6-4482-4c82-ac9a-721e76c63579" (UID: "177810e6-4482-4c82-ac9a-721e76c63579"). InnerVolumeSpecName "kube-api-access-cjxnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.799213 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-inventory" (OuterVolumeSpecName: "inventory") pod "177810e6-4482-4c82-ac9a-721e76c63579" (UID: "177810e6-4482-4c82-ac9a-721e76c63579"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.800305 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "177810e6-4482-4c82-ac9a-721e76c63579" (UID: "177810e6-4482-4c82-ac9a-721e76c63579"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.803354 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "177810e6-4482-4c82-ac9a-721e76c63579" (UID: "177810e6-4482-4c82-ac9a-721e76c63579"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.867415 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.867492 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.867506 4811 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.867519 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjxnl\" (UniqueName: \"kubernetes.io/projected/177810e6-4482-4c82-ac9a-721e76c63579-kube-api-access-cjxnl\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:09 crc kubenswrapper[4811]: I0219 10:47:09.867534 4811 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/177810e6-4482-4c82-ac9a-721e76c63579-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.130231 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" event={"ID":"177810e6-4482-4c82-ac9a-721e76c63579","Type":"ContainerDied","Data":"6ca044e4729c2efab085dc27b9a83780ec020a74201dd3e93fa4a9c458120648"} Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.130285 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ca044e4729c2efab085dc27b9a83780ec020a74201dd3e93fa4a9c458120648" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.130397 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-flfd2" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.232724 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr"] Feb 19 10:47:10 crc kubenswrapper[4811]: E0219 10:47:10.233357 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d427c9eb-d93d-414b-bae8-456ff14448e7" containerName="extract-utilities" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.233385 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d427c9eb-d93d-414b-bae8-456ff14448e7" containerName="extract-utilities" Feb 19 10:47:10 crc kubenswrapper[4811]: E0219 10:47:10.233409 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d427c9eb-d93d-414b-bae8-456ff14448e7" containerName="registry-server" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.233417 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="d427c9eb-d93d-414b-bae8-456ff14448e7" containerName="registry-server" Feb 19 10:47:10 crc kubenswrapper[4811]: E0219 10:47:10.233486 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d427c9eb-d93d-414b-bae8-456ff14448e7" containerName="extract-content" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.233495 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d427c9eb-d93d-414b-bae8-456ff14448e7" containerName="extract-content" Feb 19 10:47:10 crc kubenswrapper[4811]: E0219 10:47:10.233518 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177810e6-4482-4c82-ac9a-721e76c63579" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.233526 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="177810e6-4482-4c82-ac9a-721e76c63579" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.233788 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="d427c9eb-d93d-414b-bae8-456ff14448e7" containerName="registry-server" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.233805 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="177810e6-4482-4c82-ac9a-721e76c63579" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.234675 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.240245 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.240292 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.240302 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.240418 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.241148 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.241179 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.241719 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.254260 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr"] Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.280374 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 
10:47:10.281013 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.281278 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.281718 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.281846 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.281938 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.282030 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snxxt\" (UniqueName: \"kubernetes.io/projected/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-kube-api-access-snxxt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.282263 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.282631 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.282707 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" 
(UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.282889 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.384478 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.384864 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.384978 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.385081 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.385172 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.385288 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.385377 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.385487 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snxxt\" (UniqueName: \"kubernetes.io/projected/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-kube-api-access-snxxt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" 
(UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.385597 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.385696 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.385776 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.386531 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.388689 4811 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.389621 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.389688 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.391011 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.391994 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: 
\"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.393124 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.393164 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.393824 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.393965 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.407406 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snxxt\" (UniqueName: 
\"kubernetes.io/projected/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-kube-api-access-snxxt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6djpr\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:10 crc kubenswrapper[4811]: I0219 10:47:10.557965 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:47:11 crc kubenswrapper[4811]: I0219 10:47:11.143032 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr"] Feb 19 10:47:12 crc kubenswrapper[4811]: I0219 10:47:12.151316 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" event={"ID":"3aebcdfb-c20a-447c-9b30-fa8f749bbf90","Type":"ContainerStarted","Data":"838adb75e9a08afa949f78cd767ce3c176f073c928ba8c5aa38d597e3fe54b4f"} Feb 19 10:47:13 crc kubenswrapper[4811]: I0219 10:47:13.166747 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" event={"ID":"3aebcdfb-c20a-447c-9b30-fa8f749bbf90","Type":"ContainerStarted","Data":"7c8e0f837a83e8c50310be36a2e5ad1ace5bddefe3f25de1e667b92102b73491"} Feb 19 10:47:13 crc kubenswrapper[4811]: I0219 10:47:13.197080 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" podStartSLOduration=2.472944653 podStartE2EDuration="3.197046348s" podCreationTimestamp="2026-02-19 10:47:10 +0000 UTC" firstStartedPulling="2026-02-19 10:47:11.151816646 +0000 UTC m=+2319.678281332" lastFinishedPulling="2026-02-19 10:47:11.875918341 +0000 UTC m=+2320.402383027" observedRunningTime="2026-02-19 10:47:13.18891009 +0000 UTC m=+2321.715374776" watchObservedRunningTime="2026-02-19 10:47:13.197046348 +0000 UTC m=+2321.723511034" Feb 19 10:47:26 crc kubenswrapper[4811]: I0219 
10:47:26.791597 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:47:26 crc kubenswrapper[4811]: I0219 10:47:26.792719 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:47:27 crc kubenswrapper[4811]: I0219 10:47:27.933297 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-666bdb77b9-x6mrc" podUID="0beeae9e-3b17-4691-8273-95fbc4be85fb" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 19 10:47:56 crc kubenswrapper[4811]: I0219 10:47:56.788321 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:47:56 crc kubenswrapper[4811]: I0219 10:47:56.789287 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:48:26 crc kubenswrapper[4811]: I0219 10:48:26.787909 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:48:26 crc kubenswrapper[4811]: I0219 10:48:26.788878 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:48:26 crc kubenswrapper[4811]: I0219 10:48:26.788946 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:48:26 crc kubenswrapper[4811]: I0219 10:48:26.790088 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:48:26 crc kubenswrapper[4811]: I0219 10:48:26.790167 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" gracePeriod=600 Feb 19 10:48:26 crc kubenswrapper[4811]: E0219 10:48:26.915496 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" 
podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:48:26 crc kubenswrapper[4811]: I0219 10:48:26.961186 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" exitCode=0 Feb 19 10:48:26 crc kubenswrapper[4811]: I0219 10:48:26.961254 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d"} Feb 19 10:48:26 crc kubenswrapper[4811]: I0219 10:48:26.961335 4811 scope.go:117] "RemoveContainer" containerID="d7201f4dbe99dc72ed85368aab06db50b1d5d39c74f4d3f0e181888c8cdf80c5" Feb 19 10:48:26 crc kubenswrapper[4811]: I0219 10:48:26.962207 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:48:26 crc kubenswrapper[4811]: E0219 10:48:26.962522 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:48:40 crc kubenswrapper[4811]: I0219 10:48:40.584101 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:48:40 crc kubenswrapper[4811]: E0219 10:48:40.585563 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:48:55 crc kubenswrapper[4811]: I0219 10:48:55.583369 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:48:55 crc kubenswrapper[4811]: E0219 10:48:55.584633 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:49:09 crc kubenswrapper[4811]: I0219 10:49:09.583835 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:49:09 crc kubenswrapper[4811]: E0219 10:49:09.585686 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:49:22 crc kubenswrapper[4811]: I0219 10:49:22.593772 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:49:22 crc kubenswrapper[4811]: E0219 10:49:22.597904 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:49:29 crc kubenswrapper[4811]: I0219 10:49:29.616149 4811 generic.go:334] "Generic (PLEG): container finished" podID="3aebcdfb-c20a-447c-9b30-fa8f749bbf90" containerID="7c8e0f837a83e8c50310be36a2e5ad1ace5bddefe3f25de1e667b92102b73491" exitCode=0 Feb 19 10:49:29 crc kubenswrapper[4811]: I0219 10:49:29.616547 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" event={"ID":"3aebcdfb-c20a-447c-9b30-fa8f749bbf90","Type":"ContainerDied","Data":"7c8e0f837a83e8c50310be36a2e5ad1ace5bddefe3f25de1e667b92102b73491"} Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.066181 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.139661 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-extra-config-0\") pod \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.139769 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-3\") pod \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.139857 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-inventory\") pod \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.139996 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-0\") pod \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.140032 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snxxt\" (UniqueName: \"kubernetes.io/projected/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-kube-api-access-snxxt\") pod \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.140062 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-1\") pod \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.140105 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-1\") pod \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.140999 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-combined-ca-bundle\") pod \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\" (UID: 
\"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.141029 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-2\") pod \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.141093 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-ssh-key-openstack-edpm-ipam\") pod \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.141133 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-0\") pod \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\" (UID: \"3aebcdfb-c20a-447c-9b30-fa8f749bbf90\") " Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.147575 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-kube-api-access-snxxt" (OuterVolumeSpecName: "kube-api-access-snxxt") pod "3aebcdfb-c20a-447c-9b30-fa8f749bbf90" (UID: "3aebcdfb-c20a-447c-9b30-fa8f749bbf90"). InnerVolumeSpecName "kube-api-access-snxxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.154805 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3aebcdfb-c20a-447c-9b30-fa8f749bbf90" (UID: "3aebcdfb-c20a-447c-9b30-fa8f749bbf90"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.168585 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "3aebcdfb-c20a-447c-9b30-fa8f749bbf90" (UID: "3aebcdfb-c20a-447c-9b30-fa8f749bbf90"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.172378 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "3aebcdfb-c20a-447c-9b30-fa8f749bbf90" (UID: "3aebcdfb-c20a-447c-9b30-fa8f749bbf90"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.173139 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "3aebcdfb-c20a-447c-9b30-fa8f749bbf90" (UID: "3aebcdfb-c20a-447c-9b30-fa8f749bbf90"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.173788 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-inventory" (OuterVolumeSpecName: "inventory") pod "3aebcdfb-c20a-447c-9b30-fa8f749bbf90" (UID: "3aebcdfb-c20a-447c-9b30-fa8f749bbf90"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.176136 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "3aebcdfb-c20a-447c-9b30-fa8f749bbf90" (UID: "3aebcdfb-c20a-447c-9b30-fa8f749bbf90"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.179225 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "3aebcdfb-c20a-447c-9b30-fa8f749bbf90" (UID: "3aebcdfb-c20a-447c-9b30-fa8f749bbf90"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.186777 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "3aebcdfb-c20a-447c-9b30-fa8f749bbf90" (UID: "3aebcdfb-c20a-447c-9b30-fa8f749bbf90"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.186910 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "3aebcdfb-c20a-447c-9b30-fa8f749bbf90" (UID: "3aebcdfb-c20a-447c-9b30-fa8f749bbf90"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.190844 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3aebcdfb-c20a-447c-9b30-fa8f749bbf90" (UID: "3aebcdfb-c20a-447c-9b30-fa8f749bbf90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.243090 4811 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.243211 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.243283 4811 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.243338 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snxxt\" (UniqueName: \"kubernetes.io/projected/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-kube-api-access-snxxt\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.243391 4811 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 
10:49:31.243443 4811 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.243521 4811 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.243579 4811 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.243683 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.243764 4811 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.243823 4811 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3aebcdfb-c20a-447c-9b30-fa8f749bbf90-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.642310 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" event={"ID":"3aebcdfb-c20a-447c-9b30-fa8f749bbf90","Type":"ContainerDied","Data":"838adb75e9a08afa949f78cd767ce3c176f073c928ba8c5aa38d597e3fe54b4f"} Feb 
19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.642791 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="838adb75e9a08afa949f78cd767ce3c176f073c928ba8c5aa38d597e3fe54b4f" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.642489 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6djpr" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.756898 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx"] Feb 19 10:49:31 crc kubenswrapper[4811]: E0219 10:49:31.757380 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aebcdfb-c20a-447c-9b30-fa8f749bbf90" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.757400 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aebcdfb-c20a-447c-9b30-fa8f749bbf90" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.757635 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aebcdfb-c20a-447c-9b30-fa8f749bbf90" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.758326 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.761801 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.761939 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.762210 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wtknk" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.768018 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.772431 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx"] Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.774089 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.856372 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.856471 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: 
\"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.856497 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.856533 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2p5n\" (UniqueName: \"kubernetes.io/projected/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-kube-api-access-c2p5n\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.856697 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.856753 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.856778 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.957807 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.957868 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.957951 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 
10:49:31.957996 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.958022 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.958063 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2p5n\" (UniqueName: \"kubernetes.io/projected/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-kube-api-access-c2p5n\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.958116 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.963536 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.963543 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.964401 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.964625 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.965353 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.973171 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:31 crc kubenswrapper[4811]: I0219 10:49:31.978233 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2p5n\" (UniqueName: \"kubernetes.io/projected/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-kube-api-access-c2p5n\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:32 crc kubenswrapper[4811]: I0219 10:49:32.081634 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:49:32 crc kubenswrapper[4811]: I0219 10:49:32.628015 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx"] Feb 19 10:49:32 crc kubenswrapper[4811]: I0219 10:49:32.654821 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" event={"ID":"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b","Type":"ContainerStarted","Data":"478c171eeda4b78895e392990f5d7de0d491f45279fea65f78bcef361daa8403"} Feb 19 10:49:33 crc kubenswrapper[4811]: I0219 10:49:33.090967 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 10:49:33 crc kubenswrapper[4811]: I0219 10:49:33.585085 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:49:33 crc kubenswrapper[4811]: E0219 10:49:33.586466 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:49:33 crc kubenswrapper[4811]: I0219 10:49:33.667725 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" event={"ID":"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b","Type":"ContainerStarted","Data":"c10f6464d73f15fdfff3669c95357ef4b74a2d66cf7824366d3c0f71ceec42c8"} Feb 19 10:49:33 crc kubenswrapper[4811]: I0219 10:49:33.693004 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" podStartSLOduration=2.240768081 podStartE2EDuration="2.692973901s" podCreationTimestamp="2026-02-19 10:49:31 +0000 UTC" firstStartedPulling="2026-02-19 10:49:32.634673896 +0000 UTC m=+2461.161138582" lastFinishedPulling="2026-02-19 10:49:33.086879696 +0000 UTC m=+2461.613344402" observedRunningTime="2026-02-19 10:49:33.692067808 +0000 UTC m=+2462.218532494" watchObservedRunningTime="2026-02-19 10:49:33.692973901 +0000 UTC m=+2462.219438587" Feb 19 10:49:48 crc kubenswrapper[4811]: I0219 10:49:48.584976 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:49:48 crc kubenswrapper[4811]: E0219 10:49:48.586308 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:50:03 crc kubenswrapper[4811]: I0219 10:50:03.583291 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:50:03 crc kubenswrapper[4811]: E0219 10:50:03.584285 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:50:15 crc kubenswrapper[4811]: I0219 10:50:15.584707 4811 scope.go:117] "RemoveContainer" 
containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:50:15 crc kubenswrapper[4811]: E0219 10:50:15.585973 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:50:26 crc kubenswrapper[4811]: I0219 10:50:26.583210 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:50:26 crc kubenswrapper[4811]: E0219 10:50:26.584410 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:50:41 crc kubenswrapper[4811]: I0219 10:50:41.583989 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:50:41 crc kubenswrapper[4811]: E0219 10:50:41.585265 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:50:54 crc kubenswrapper[4811]: I0219 10:50:54.583617 4811 scope.go:117] 
"RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:50:54 crc kubenswrapper[4811]: E0219 10:50:54.584756 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.033509 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x7wbs"] Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.038930 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.053649 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7wbs"] Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.219871 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-catalog-content\") pod \"redhat-operators-x7wbs\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.219963 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-utilities\") pod \"redhat-operators-x7wbs\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.220206 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpdr9\" (UniqueName: \"kubernetes.io/projected/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-kube-api-access-gpdr9\") pod \"redhat-operators-x7wbs\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.323187 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-catalog-content\") pod \"redhat-operators-x7wbs\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.323265 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-utilities\") pod \"redhat-operators-x7wbs\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.323321 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpdr9\" (UniqueName: \"kubernetes.io/projected/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-kube-api-access-gpdr9\") pod \"redhat-operators-x7wbs\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.324090 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-catalog-content\") pod \"redhat-operators-x7wbs\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.324146 4811 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-utilities\") pod \"redhat-operators-x7wbs\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.349487 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpdr9\" (UniqueName: \"kubernetes.io/projected/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-kube-api-access-gpdr9\") pod \"redhat-operators-x7wbs\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.387292 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:03 crc kubenswrapper[4811]: I0219 10:51:03.946305 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7wbs"] Feb 19 10:51:04 crc kubenswrapper[4811]: I0219 10:51:04.608335 4811 generic.go:334] "Generic (PLEG): container finished" podID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" containerID="f28678569180039bc8c4c72af06c4cded66bb0e1da5e7f147a859d30dbcf8c5c" exitCode=0 Feb 19 10:51:04 crc kubenswrapper[4811]: I0219 10:51:04.608460 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wbs" event={"ID":"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360","Type":"ContainerDied","Data":"f28678569180039bc8c4c72af06c4cded66bb0e1da5e7f147a859d30dbcf8c5c"} Feb 19 10:51:04 crc kubenswrapper[4811]: I0219 10:51:04.608501 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wbs" event={"ID":"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360","Type":"ContainerStarted","Data":"916373d2c2119cb0d428a30d58f7459c5a46e01e521630be31a1969aeae48953"} Feb 19 10:51:06 crc kubenswrapper[4811]: I0219 
10:51:06.644718 4811 generic.go:334] "Generic (PLEG): container finished" podID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" containerID="766e85575611ca47635837bb2fe729bcde5bb1a1d4be6dfbae11079ce5fc3b2d" exitCode=0 Feb 19 10:51:06 crc kubenswrapper[4811]: I0219 10:51:06.645693 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wbs" event={"ID":"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360","Type":"ContainerDied","Data":"766e85575611ca47635837bb2fe729bcde5bb1a1d4be6dfbae11079ce5fc3b2d"} Feb 19 10:51:07 crc kubenswrapper[4811]: I0219 10:51:07.658987 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wbs" event={"ID":"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360","Type":"ContainerStarted","Data":"82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c"} Feb 19 10:51:07 crc kubenswrapper[4811]: I0219 10:51:07.691010 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x7wbs" podStartSLOduration=2.278689255 podStartE2EDuration="4.690986212s" podCreationTimestamp="2026-02-19 10:51:03 +0000 UTC" firstStartedPulling="2026-02-19 10:51:04.611406579 +0000 UTC m=+2553.137871265" lastFinishedPulling="2026-02-19 10:51:07.023703536 +0000 UTC m=+2555.550168222" observedRunningTime="2026-02-19 10:51:07.684815308 +0000 UTC m=+2556.211280034" watchObservedRunningTime="2026-02-19 10:51:07.690986212 +0000 UTC m=+2556.217450908" Feb 19 10:51:08 crc kubenswrapper[4811]: I0219 10:51:08.583878 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:51:08 crc kubenswrapper[4811]: E0219 10:51:08.584638 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.004441 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2s9s7"] Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.008080 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.021207 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2s9s7"] Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.184348 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-utilities\") pod \"community-operators-2s9s7\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.184544 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqtt\" (UniqueName: \"kubernetes.io/projected/3b433a52-5266-4467-9b49-b398645688ce-kube-api-access-7vqtt\") pod \"community-operators-2s9s7\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.184620 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-catalog-content\") pod \"community-operators-2s9s7\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " 
pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.286623 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-utilities\") pod \"community-operators-2s9s7\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.286801 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vqtt\" (UniqueName: \"kubernetes.io/projected/3b433a52-5266-4467-9b49-b398645688ce-kube-api-access-7vqtt\") pod \"community-operators-2s9s7\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.286894 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-catalog-content\") pod \"community-operators-2s9s7\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.287501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-utilities\") pod \"community-operators-2s9s7\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.287563 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-catalog-content\") pod \"community-operators-2s9s7\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " 
pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.310354 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vqtt\" (UniqueName: \"kubernetes.io/projected/3b433a52-5266-4467-9b49-b398645688ce-kube-api-access-7vqtt\") pod \"community-operators-2s9s7\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.330546 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:09 crc kubenswrapper[4811]: I0219 10:51:09.923525 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2s9s7"] Feb 19 10:51:09 crc kubenswrapper[4811]: W0219 10:51:09.942133 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b433a52_5266_4467_9b49_b398645688ce.slice/crio-558bbd99002a07279908f38e147f0e88f4229ca1bd0d346905e1c4e08742ec9f WatchSource:0}: Error finding container 558bbd99002a07279908f38e147f0e88f4229ca1bd0d346905e1c4e08742ec9f: Status 404 returned error can't find the container with id 558bbd99002a07279908f38e147f0e88f4229ca1bd0d346905e1c4e08742ec9f Feb 19 10:51:10 crc kubenswrapper[4811]: I0219 10:51:10.693225 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2s9s7" event={"ID":"3b433a52-5266-4467-9b49-b398645688ce","Type":"ContainerStarted","Data":"41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31"} Feb 19 10:51:10 crc kubenswrapper[4811]: I0219 10:51:10.693842 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2s9s7" 
event={"ID":"3b433a52-5266-4467-9b49-b398645688ce","Type":"ContainerStarted","Data":"558bbd99002a07279908f38e147f0e88f4229ca1bd0d346905e1c4e08742ec9f"} Feb 19 10:51:12 crc kubenswrapper[4811]: I0219 10:51:12.719414 4811 generic.go:334] "Generic (PLEG): container finished" podID="3b433a52-5266-4467-9b49-b398645688ce" containerID="41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31" exitCode=0 Feb 19 10:51:12 crc kubenswrapper[4811]: I0219 10:51:12.719525 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2s9s7" event={"ID":"3b433a52-5266-4467-9b49-b398645688ce","Type":"ContainerDied","Data":"41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31"} Feb 19 10:51:13 crc kubenswrapper[4811]: I0219 10:51:13.388324 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:13 crc kubenswrapper[4811]: I0219 10:51:13.388407 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:13 crc kubenswrapper[4811]: I0219 10:51:13.468461 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:13 crc kubenswrapper[4811]: I0219 10:51:13.741438 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2s9s7" event={"ID":"3b433a52-5266-4467-9b49-b398645688ce","Type":"ContainerStarted","Data":"e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82"} Feb 19 10:51:13 crc kubenswrapper[4811]: I0219 10:51:13.795340 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:14 crc kubenswrapper[4811]: I0219 10:51:14.756879 4811 generic.go:334] "Generic (PLEG): container finished" podID="3b433a52-5266-4467-9b49-b398645688ce" 
containerID="e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82" exitCode=0 Feb 19 10:51:14 crc kubenswrapper[4811]: I0219 10:51:14.756970 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2s9s7" event={"ID":"3b433a52-5266-4467-9b49-b398645688ce","Type":"ContainerDied","Data":"e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82"} Feb 19 10:51:14 crc kubenswrapper[4811]: I0219 10:51:14.761673 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:51:15 crc kubenswrapper[4811]: I0219 10:51:15.390273 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7wbs"] Feb 19 10:51:15 crc kubenswrapper[4811]: I0219 10:51:15.770999 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2s9s7" event={"ID":"3b433a52-5266-4467-9b49-b398645688ce","Type":"ContainerStarted","Data":"081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a"} Feb 19 10:51:15 crc kubenswrapper[4811]: I0219 10:51:15.771211 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x7wbs" podUID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" containerName="registry-server" containerID="cri-o://82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c" gracePeriod=2 Feb 19 10:51:15 crc kubenswrapper[4811]: I0219 10:51:15.801060 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2s9s7" podStartSLOduration=5.39044092 podStartE2EDuration="7.801026355s" podCreationTimestamp="2026-02-19 10:51:08 +0000 UTC" firstStartedPulling="2026-02-19 10:51:12.722655663 +0000 UTC m=+2561.249120349" lastFinishedPulling="2026-02-19 10:51:15.133241098 +0000 UTC m=+2563.659705784" observedRunningTime="2026-02-19 10:51:15.798133323 +0000 UTC m=+2564.324598009" 
watchObservedRunningTime="2026-02-19 10:51:15.801026355 +0000 UTC m=+2564.327491041" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.278319 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.346909 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-catalog-content\") pod \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.347599 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-utilities\") pod \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.347782 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpdr9\" (UniqueName: \"kubernetes.io/projected/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-kube-api-access-gpdr9\") pod \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\" (UID: \"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360\") " Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.348276 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-utilities" (OuterVolumeSpecName: "utilities") pod "c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" (UID: "c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.349027 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.355955 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-kube-api-access-gpdr9" (OuterVolumeSpecName: "kube-api-access-gpdr9") pod "c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" (UID: "c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360"). InnerVolumeSpecName "kube-api-access-gpdr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.451441 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpdr9\" (UniqueName: \"kubernetes.io/projected/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-kube-api-access-gpdr9\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.467118 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" (UID: "c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.554347 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.784770 4811 generic.go:334] "Generic (PLEG): container finished" podID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" containerID="82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c" exitCode=0 Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.784829 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7wbs" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.784828 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wbs" event={"ID":"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360","Type":"ContainerDied","Data":"82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c"} Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.784970 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wbs" event={"ID":"c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360","Type":"ContainerDied","Data":"916373d2c2119cb0d428a30d58f7459c5a46e01e521630be31a1969aeae48953"} Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.785000 4811 scope.go:117] "RemoveContainer" containerID="82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.825180 4811 scope.go:117] "RemoveContainer" containerID="766e85575611ca47635837bb2fe729bcde5bb1a1d4be6dfbae11079ce5fc3b2d" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.831295 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7wbs"] Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 
10:51:16.842316 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x7wbs"] Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.858179 4811 scope.go:117] "RemoveContainer" containerID="f28678569180039bc8c4c72af06c4cded66bb0e1da5e7f147a859d30dbcf8c5c" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.915944 4811 scope.go:117] "RemoveContainer" containerID="82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c" Feb 19 10:51:16 crc kubenswrapper[4811]: E0219 10:51:16.916413 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c\": container with ID starting with 82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c not found: ID does not exist" containerID="82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.916549 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c"} err="failed to get container status \"82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c\": rpc error: code = NotFound desc = could not find container \"82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c\": container with ID starting with 82350fb444d1f80d21e054e9a5371359f8b8d12a526b15da7a49045fdea6b97c not found: ID does not exist" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.916580 4811 scope.go:117] "RemoveContainer" containerID="766e85575611ca47635837bb2fe729bcde5bb1a1d4be6dfbae11079ce5fc3b2d" Feb 19 10:51:16 crc kubenswrapper[4811]: E0219 10:51:16.916885 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"766e85575611ca47635837bb2fe729bcde5bb1a1d4be6dfbae11079ce5fc3b2d\": container with ID 
starting with 766e85575611ca47635837bb2fe729bcde5bb1a1d4be6dfbae11079ce5fc3b2d not found: ID does not exist" containerID="766e85575611ca47635837bb2fe729bcde5bb1a1d4be6dfbae11079ce5fc3b2d" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.916913 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766e85575611ca47635837bb2fe729bcde5bb1a1d4be6dfbae11079ce5fc3b2d"} err="failed to get container status \"766e85575611ca47635837bb2fe729bcde5bb1a1d4be6dfbae11079ce5fc3b2d\": rpc error: code = NotFound desc = could not find container \"766e85575611ca47635837bb2fe729bcde5bb1a1d4be6dfbae11079ce5fc3b2d\": container with ID starting with 766e85575611ca47635837bb2fe729bcde5bb1a1d4be6dfbae11079ce5fc3b2d not found: ID does not exist" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.916935 4811 scope.go:117] "RemoveContainer" containerID="f28678569180039bc8c4c72af06c4cded66bb0e1da5e7f147a859d30dbcf8c5c" Feb 19 10:51:16 crc kubenswrapper[4811]: E0219 10:51:16.917885 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f28678569180039bc8c4c72af06c4cded66bb0e1da5e7f147a859d30dbcf8c5c\": container with ID starting with f28678569180039bc8c4c72af06c4cded66bb0e1da5e7f147a859d30dbcf8c5c not found: ID does not exist" containerID="f28678569180039bc8c4c72af06c4cded66bb0e1da5e7f147a859d30dbcf8c5c" Feb 19 10:51:16 crc kubenswrapper[4811]: I0219 10:51:16.917914 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28678569180039bc8c4c72af06c4cded66bb0e1da5e7f147a859d30dbcf8c5c"} err="failed to get container status \"f28678569180039bc8c4c72af06c4cded66bb0e1da5e7f147a859d30dbcf8c5c\": rpc error: code = NotFound desc = could not find container \"f28678569180039bc8c4c72af06c4cded66bb0e1da5e7f147a859d30dbcf8c5c\": container with ID starting with f28678569180039bc8c4c72af06c4cded66bb0e1da5e7f147a859d30dbcf8c5c not found: 
ID does not exist" Feb 19 10:51:18 crc kubenswrapper[4811]: I0219 10:51:18.595309 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" path="/var/lib/kubelet/pods/c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360/volumes" Feb 19 10:51:19 crc kubenswrapper[4811]: I0219 10:51:19.331666 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:19 crc kubenswrapper[4811]: I0219 10:51:19.331746 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:19 crc kubenswrapper[4811]: I0219 10:51:19.389056 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:22 crc kubenswrapper[4811]: I0219 10:51:22.595701 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:51:22 crc kubenswrapper[4811]: E0219 10:51:22.596542 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:51:27 crc kubenswrapper[4811]: I0219 10:51:27.932639 4811 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-666bdb77b9-x6mrc" podUID="0beeae9e-3b17-4691-8273-95fbc4be85fb" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 19 10:51:29 crc kubenswrapper[4811]: I0219 10:51:29.383092 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:29 crc kubenswrapper[4811]: I0219 10:51:29.441250 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2s9s7"] Feb 19 10:51:29 crc kubenswrapper[4811]: I0219 10:51:29.920295 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2s9s7" podUID="3b433a52-5266-4467-9b49-b398645688ce" containerName="registry-server" containerID="cri-o://081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a" gracePeriod=2 Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.418472 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.515115 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vqtt\" (UniqueName: \"kubernetes.io/projected/3b433a52-5266-4467-9b49-b398645688ce-kube-api-access-7vqtt\") pod \"3b433a52-5266-4467-9b49-b398645688ce\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.515258 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-utilities\") pod \"3b433a52-5266-4467-9b49-b398645688ce\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.515313 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-catalog-content\") pod \"3b433a52-5266-4467-9b49-b398645688ce\" (UID: \"3b433a52-5266-4467-9b49-b398645688ce\") " Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.517469 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-utilities" (OuterVolumeSpecName: "utilities") pod "3b433a52-5266-4467-9b49-b398645688ce" (UID: "3b433a52-5266-4467-9b49-b398645688ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.525963 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b433a52-5266-4467-9b49-b398645688ce-kube-api-access-7vqtt" (OuterVolumeSpecName: "kube-api-access-7vqtt") pod "3b433a52-5266-4467-9b49-b398645688ce" (UID: "3b433a52-5266-4467-9b49-b398645688ce"). InnerVolumeSpecName "kube-api-access-7vqtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.578890 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b433a52-5266-4467-9b49-b398645688ce" (UID: "3b433a52-5266-4467-9b49-b398645688ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.618444 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vqtt\" (UniqueName: \"kubernetes.io/projected/3b433a52-5266-4467-9b49-b398645688ce-kube-api-access-7vqtt\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.618513 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.618530 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b433a52-5266-4467-9b49-b398645688ce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.932825 4811 generic.go:334] "Generic (PLEG): container finished" podID="3b433a52-5266-4467-9b49-b398645688ce" containerID="081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a" exitCode=0 Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.932888 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2s9s7" event={"ID":"3b433a52-5266-4467-9b49-b398645688ce","Type":"ContainerDied","Data":"081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a"} Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.932913 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2s9s7" Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.932957 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2s9s7" event={"ID":"3b433a52-5266-4467-9b49-b398645688ce","Type":"ContainerDied","Data":"558bbd99002a07279908f38e147f0e88f4229ca1bd0d346905e1c4e08742ec9f"} Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.932993 4811 scope.go:117] "RemoveContainer" containerID="081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a" Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.963124 4811 scope.go:117] "RemoveContainer" containerID="e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82" Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.964973 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2s9s7"] Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.987107 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2s9s7"] Feb 19 10:51:30 crc kubenswrapper[4811]: I0219 10:51:30.990316 4811 scope.go:117] "RemoveContainer" containerID="41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31" Feb 19 10:51:31 crc kubenswrapper[4811]: I0219 10:51:31.044706 4811 scope.go:117] "RemoveContainer" containerID="081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a" Feb 19 10:51:31 crc kubenswrapper[4811]: E0219 10:51:31.045298 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a\": container with ID starting with 081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a not found: ID does not exist" containerID="081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a" Feb 19 10:51:31 crc kubenswrapper[4811]: I0219 10:51:31.045427 4811 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a"} err="failed to get container status \"081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a\": rpc error: code = NotFound desc = could not find container \"081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a\": container with ID starting with 081ce6c737c9ea4b13fe7ebf3186cbac2ac2f51ef3d3eecd6d59dd4aeade664a not found: ID does not exist" Feb 19 10:51:31 crc kubenswrapper[4811]: I0219 10:51:31.045564 4811 scope.go:117] "RemoveContainer" containerID="e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82" Feb 19 10:51:31 crc kubenswrapper[4811]: E0219 10:51:31.046097 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82\": container with ID starting with e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82 not found: ID does not exist" containerID="e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82" Feb 19 10:51:31 crc kubenswrapper[4811]: I0219 10:51:31.046135 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82"} err="failed to get container status \"e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82\": rpc error: code = NotFound desc = could not find container \"e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82\": container with ID starting with e82996a64d45ca4cf6bcd5abe135886e652533f7d26a506b5d471c97c2780d82 not found: ID does not exist" Feb 19 10:51:31 crc kubenswrapper[4811]: I0219 10:51:31.046164 4811 scope.go:117] "RemoveContainer" containerID="41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31" Feb 19 10:51:31 crc kubenswrapper[4811]: E0219 
10:51:31.046464 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31\": container with ID starting with 41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31 not found: ID does not exist" containerID="41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31" Feb 19 10:51:31 crc kubenswrapper[4811]: I0219 10:51:31.046562 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31"} err="failed to get container status \"41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31\": rpc error: code = NotFound desc = could not find container \"41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31\": container with ID starting with 41ffb4134c532bfa7334db1b9759477ce410b8b1c98ae43544d00c69fec92f31 not found: ID does not exist" Feb 19 10:51:32 crc kubenswrapper[4811]: I0219 10:51:32.596371 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b433a52-5266-4467-9b49-b398645688ce" path="/var/lib/kubelet/pods/3b433a52-5266-4467-9b49-b398645688ce/volumes" Feb 19 10:51:36 crc kubenswrapper[4811]: I0219 10:51:36.584237 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:51:36 crc kubenswrapper[4811]: E0219 10:51:36.585220 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:51:49 crc kubenswrapper[4811]: I0219 10:51:49.158581 
4811 generic.go:334] "Generic (PLEG): container finished" podID="2017b43a-f947-4d28-97b7-5cf3d5d0cc1b" containerID="c10f6464d73f15fdfff3669c95357ef4b74a2d66cf7824366d3c0f71ceec42c8" exitCode=0 Feb 19 10:51:49 crc kubenswrapper[4811]: I0219 10:51:49.158671 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" event={"ID":"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b","Type":"ContainerDied","Data":"c10f6464d73f15fdfff3669c95357ef4b74a2d66cf7824366d3c0f71ceec42c8"} Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.589147 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:51:50 crc kubenswrapper[4811]: E0219 10:51:50.589875 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.641597 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.799708 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-0\") pod \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.800203 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2p5n\" (UniqueName: \"kubernetes.io/projected/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-kube-api-access-c2p5n\") pod \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.800239 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-telemetry-combined-ca-bundle\") pod \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.800566 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-1\") pod \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.800643 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ssh-key-openstack-edpm-ipam\") pod \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " Feb 19 
10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.800800 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-2\") pod \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.800858 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-inventory\") pod \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\" (UID: \"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b\") " Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.806929 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-kube-api-access-c2p5n" (OuterVolumeSpecName: "kube-api-access-c2p5n") pod "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b" (UID: "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b"). InnerVolumeSpecName "kube-api-access-c2p5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.808796 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b" (UID: "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.834895 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b" (UID: "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.836841 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b" (UID: "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.838006 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b" (UID: "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.838706 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b" (UID: "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.847149 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-inventory" (OuterVolumeSpecName: "inventory") pod "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b" (UID: "2017b43a-f947-4d28-97b7-5cf3d5d0cc1b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.904659 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.904693 4811 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.904706 4811 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.904716 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2p5n\" (UniqueName: \"kubernetes.io/projected/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-kube-api-access-c2p5n\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.904744 4811 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.904754 4811 reconciler_common.go:293] 
"Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:50 crc kubenswrapper[4811]: I0219 10:51:50.904767 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2017b43a-f947-4d28-97b7-5cf3d5d0cc1b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 10:51:51 crc kubenswrapper[4811]: I0219 10:51:51.182515 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" event={"ID":"2017b43a-f947-4d28-97b7-5cf3d5d0cc1b","Type":"ContainerDied","Data":"478c171eeda4b78895e392990f5d7de0d491f45279fea65f78bcef361daa8403"} Feb 19 10:51:51 crc kubenswrapper[4811]: I0219 10:51:51.182632 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="478c171eeda4b78895e392990f5d7de0d491f45279fea65f78bcef361daa8403" Feb 19 10:51:51 crc kubenswrapper[4811]: I0219 10:51:51.182553 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.250373 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lxkwp"] Feb 19 10:51:54 crc kubenswrapper[4811]: E0219 10:51:54.251627 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2017b43a-f947-4d28-97b7-5cf3d5d0cc1b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.251647 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="2017b43a-f947-4d28-97b7-5cf3d5d0cc1b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 10:51:54 crc kubenswrapper[4811]: E0219 10:51:54.251710 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" containerName="registry-server" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.251721 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" containerName="registry-server" Feb 19 10:51:54 crc kubenswrapper[4811]: E0219 10:51:54.251744 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b433a52-5266-4467-9b49-b398645688ce" containerName="registry-server" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.251781 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b433a52-5266-4467-9b49-b398645688ce" containerName="registry-server" Feb 19 10:51:54 crc kubenswrapper[4811]: E0219 10:51:54.251839 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" containerName="extract-content" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.251851 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" containerName="extract-content" Feb 19 10:51:54 crc kubenswrapper[4811]: E0219 10:51:54.251865 4811 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3b433a52-5266-4467-9b49-b398645688ce" containerName="extract-utilities" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.251873 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b433a52-5266-4467-9b49-b398645688ce" containerName="extract-utilities" Feb 19 10:51:54 crc kubenswrapper[4811]: E0219 10:51:54.251890 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b433a52-5266-4467-9b49-b398645688ce" containerName="extract-content" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.251929 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b433a52-5266-4467-9b49-b398645688ce" containerName="extract-content" Feb 19 10:51:54 crc kubenswrapper[4811]: E0219 10:51:54.251949 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" containerName="extract-utilities" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.251959 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" containerName="extract-utilities" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.252347 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c01ccc42-6767-4ce9-8ac7-c6fc2aeaa360" containerName="registry-server" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.252378 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="2017b43a-f947-4d28-97b7-5cf3d5d0cc1b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.252411 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b433a52-5266-4467-9b49-b398645688ce" containerName="registry-server" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.254924 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.271531 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxkwp"] Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.391326 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r7j7\" (UniqueName: \"kubernetes.io/projected/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-kube-api-access-4r7j7\") pod \"redhat-marketplace-lxkwp\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.391816 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-catalog-content\") pod \"redhat-marketplace-lxkwp\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.391931 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-utilities\") pod \"redhat-marketplace-lxkwp\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.494162 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r7j7\" (UniqueName: \"kubernetes.io/projected/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-kube-api-access-4r7j7\") pod \"redhat-marketplace-lxkwp\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.494490 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-catalog-content\") pod \"redhat-marketplace-lxkwp\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.494678 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-utilities\") pod \"redhat-marketplace-lxkwp\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.495444 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-utilities\") pod \"redhat-marketplace-lxkwp\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.495462 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-catalog-content\") pod \"redhat-marketplace-lxkwp\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.520924 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r7j7\" (UniqueName: \"kubernetes.io/projected/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-kube-api-access-4r7j7\") pod \"redhat-marketplace-lxkwp\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:51:54 crc kubenswrapper[4811]: I0219 10:51:54.585209 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:51:55 crc kubenswrapper[4811]: I0219 10:51:55.112593 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxkwp"] Feb 19 10:51:55 crc kubenswrapper[4811]: I0219 10:51:55.250256 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxkwp" event={"ID":"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a","Type":"ContainerStarted","Data":"b8d90929d43fe45127e202c6ca286b7e3f9f1a30789f49a659d0cb440953b1fc"} Feb 19 10:51:56 crc kubenswrapper[4811]: I0219 10:51:56.267883 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxkwp" event={"ID":"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a","Type":"ContainerDied","Data":"7ae5a80371673d96b18c9fc07e119e6e2a49e7cd8fe5db4a955ca30ba422bab4"} Feb 19 10:51:56 crc kubenswrapper[4811]: I0219 10:51:56.267686 4811 generic.go:334] "Generic (PLEG): container finished" podID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" containerID="7ae5a80371673d96b18c9fc07e119e6e2a49e7cd8fe5db4a955ca30ba422bab4" exitCode=0 Feb 19 10:51:57 crc kubenswrapper[4811]: I0219 10:51:57.291536 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxkwp" event={"ID":"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a","Type":"ContainerStarted","Data":"2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33"} Feb 19 10:51:58 crc kubenswrapper[4811]: I0219 10:51:58.304914 4811 generic.go:334] "Generic (PLEG): container finished" podID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" containerID="2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33" exitCode=0 Feb 19 10:51:58 crc kubenswrapper[4811]: I0219 10:51:58.304976 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxkwp" 
event={"ID":"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a","Type":"ContainerDied","Data":"2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33"} Feb 19 10:51:59 crc kubenswrapper[4811]: I0219 10:51:59.319432 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxkwp" event={"ID":"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a","Type":"ContainerStarted","Data":"75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350"} Feb 19 10:51:59 crc kubenswrapper[4811]: I0219 10:51:59.344948 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lxkwp" podStartSLOduration=2.921137915 podStartE2EDuration="5.344928994s" podCreationTimestamp="2026-02-19 10:51:54 +0000 UTC" firstStartedPulling="2026-02-19 10:51:56.270443631 +0000 UTC m=+2604.796908317" lastFinishedPulling="2026-02-19 10:51:58.69423471 +0000 UTC m=+2607.220699396" observedRunningTime="2026-02-19 10:51:59.338156506 +0000 UTC m=+2607.864621202" watchObservedRunningTime="2026-02-19 10:51:59.344928994 +0000 UTC m=+2607.871393680" Feb 19 10:52:03 crc kubenswrapper[4811]: I0219 10:52:03.583619 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:52:03 crc kubenswrapper[4811]: E0219 10:52:03.586294 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:52:04 crc kubenswrapper[4811]: I0219 10:52:04.597945 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:52:04 crc 
kubenswrapper[4811]: I0219 10:52:04.598362 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:52:04 crc kubenswrapper[4811]: I0219 10:52:04.644017 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:52:05 crc kubenswrapper[4811]: I0219 10:52:05.425776 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:52:05 crc kubenswrapper[4811]: I0219 10:52:05.501336 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxkwp"] Feb 19 10:52:07 crc kubenswrapper[4811]: I0219 10:52:07.396761 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lxkwp" podUID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" containerName="registry-server" containerID="cri-o://75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350" gracePeriod=2 Feb 19 10:52:07 crc kubenswrapper[4811]: I0219 10:52:07.979670 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.121557 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-utilities\") pod \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.121718 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r7j7\" (UniqueName: \"kubernetes.io/projected/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-kube-api-access-4r7j7\") pod \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.121898 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-catalog-content\") pod \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\" (UID: \"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a\") " Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.123934 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-utilities" (OuterVolumeSpecName: "utilities") pod "0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" (UID: "0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.134857 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-kube-api-access-4r7j7" (OuterVolumeSpecName: "kube-api-access-4r7j7") pod "0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" (UID: "0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a"). InnerVolumeSpecName "kube-api-access-4r7j7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.224938 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.225035 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r7j7\" (UniqueName: \"kubernetes.io/projected/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-kube-api-access-4r7j7\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.408368 4811 generic.go:334] "Generic (PLEG): container finished" podID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" containerID="75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350" exitCode=0 Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.408435 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxkwp" event={"ID":"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a","Type":"ContainerDied","Data":"75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350"} Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.408528 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxkwp" event={"ID":"0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a","Type":"ContainerDied","Data":"b8d90929d43fe45127e202c6ca286b7e3f9f1a30789f49a659d0cb440953b1fc"} Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.408561 4811 scope.go:117] "RemoveContainer" containerID="75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.408650 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxkwp" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.434969 4811 scope.go:117] "RemoveContainer" containerID="2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.461978 4811 scope.go:117] "RemoveContainer" containerID="7ae5a80371673d96b18c9fc07e119e6e2a49e7cd8fe5db4a955ca30ba422bab4" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.501595 4811 scope.go:117] "RemoveContainer" containerID="75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350" Feb 19 10:52:08 crc kubenswrapper[4811]: E0219 10:52:08.502019 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350\": container with ID starting with 75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350 not found: ID does not exist" containerID="75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.502057 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350"} err="failed to get container status \"75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350\": rpc error: code = NotFound desc = could not find container \"75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350\": container with ID starting with 75cbd68ef91f232841cbb1f71d77cb95ad206cf85adf169746c634cf088e0350 not found: ID does not exist" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.502080 4811 scope.go:117] "RemoveContainer" containerID="2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33" Feb 19 10:52:08 crc kubenswrapper[4811]: E0219 10:52:08.502392 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33\": container with ID starting with 2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33 not found: ID does not exist" containerID="2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.502527 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33"} err="failed to get container status \"2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33\": rpc error: code = NotFound desc = could not find container \"2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33\": container with ID starting with 2d7eed22fd347d3fc123094f8d95190ae7fa82164306b3060b40c32882a87e33 not found: ID does not exist" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.502602 4811 scope.go:117] "RemoveContainer" containerID="7ae5a80371673d96b18c9fc07e119e6e2a49e7cd8fe5db4a955ca30ba422bab4" Feb 19 10:52:08 crc kubenswrapper[4811]: E0219 10:52:08.502909 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ae5a80371673d96b18c9fc07e119e6e2a49e7cd8fe5db4a955ca30ba422bab4\": container with ID starting with 7ae5a80371673d96b18c9fc07e119e6e2a49e7cd8fe5db4a955ca30ba422bab4 not found: ID does not exist" containerID="7ae5a80371673d96b18c9fc07e119e6e2a49e7cd8fe5db4a955ca30ba422bab4" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.502941 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ae5a80371673d96b18c9fc07e119e6e2a49e7cd8fe5db4a955ca30ba422bab4"} err="failed to get container status \"7ae5a80371673d96b18c9fc07e119e6e2a49e7cd8fe5db4a955ca30ba422bab4\": rpc error: code = NotFound desc = could not find container 
\"7ae5a80371673d96b18c9fc07e119e6e2a49e7cd8fe5db4a955ca30ba422bab4\": container with ID starting with 7ae5a80371673d96b18c9fc07e119e6e2a49e7cd8fe5db4a955ca30ba422bab4 not found: ID does not exist" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.533702 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" (UID: "0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.536521 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.736529 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxkwp"] Feb 19 10:52:08 crc kubenswrapper[4811]: I0219 10:52:08.745147 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxkwp"] Feb 19 10:52:10 crc kubenswrapper[4811]: I0219 10:52:10.606109 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" path="/var/lib/kubelet/pods/0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a/volumes" Feb 19 10:52:14 crc kubenswrapper[4811]: I0219 10:52:14.589245 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:52:14 crc kubenswrapper[4811]: E0219 10:52:14.590468 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:52:28 crc kubenswrapper[4811]: I0219 10:52:28.583650 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:52:28 crc kubenswrapper[4811]: E0219 10:52:28.584765 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:52:34 crc kubenswrapper[4811]: I0219 10:52:34.910515 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 10:52:34 crc kubenswrapper[4811]: E0219 10:52:34.911739 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" containerName="extract-utilities" Feb 19 10:52:34 crc kubenswrapper[4811]: I0219 10:52:34.911764 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" containerName="extract-utilities" Feb 19 10:52:34 crc kubenswrapper[4811]: E0219 10:52:34.911775 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" containerName="registry-server" Feb 19 10:52:34 crc kubenswrapper[4811]: I0219 10:52:34.911783 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" containerName="registry-server" Feb 19 10:52:34 crc kubenswrapper[4811]: E0219 10:52:34.911798 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" containerName="extract-content" Feb 19 10:52:34 crc kubenswrapper[4811]: I0219 10:52:34.911806 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" containerName="extract-content" Feb 19 10:52:34 crc kubenswrapper[4811]: I0219 10:52:34.912073 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9f7af1-e2c6-4479-8f73-89fad6fb3f6a" containerName="registry-server" Feb 19 10:52:34 crc kubenswrapper[4811]: I0219 10:52:34.913026 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 10:52:34 crc kubenswrapper[4811]: I0219 10:52:34.915050 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 19 10:52:34 crc kubenswrapper[4811]: I0219 10:52:34.915244 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 19 10:52:34 crc kubenswrapper[4811]: I0219 10:52:34.915492 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kvmmx" Feb 19 10:52:34 crc kubenswrapper[4811]: I0219 10:52:34.917779 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 19 10:52:34 crc kubenswrapper[4811]: I0219 10:52:34.921954 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.056815 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.056900 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-config-data\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.056932 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.056964 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.056996 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts7rf\" (UniqueName: \"kubernetes.io/projected/4e965f02-705b-4631-879f-27070f0bfa70-kube-api-access-ts7rf\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.057050 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.057094 4811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.057118 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.057219 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.159064 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.159188 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.159274 4811 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-config-data\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.159303 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.159326 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.159349 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts7rf\" (UniqueName: \"kubernetes.io/projected/4e965f02-705b-4631-879f-27070f0bfa70-kube-api-access-ts7rf\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.159382 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.159423 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.159485 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.159860 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.159994 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.160248 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.161509 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.161857 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-config-data\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.165033 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.171962 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.172589 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.181864 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts7rf\" (UniqueName: \"kubernetes.io/projected/4e965f02-705b-4631-879f-27070f0bfa70-kube-api-access-ts7rf\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " 
pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.196388 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.249808 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 10:52:35 crc kubenswrapper[4811]: I0219 10:52:35.761236 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 10:52:36 crc kubenswrapper[4811]: I0219 10:52:36.734394 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4e965f02-705b-4631-879f-27070f0bfa70","Type":"ContainerStarted","Data":"1c93c09b80b57980d7f53321e443b85249f6d721cc01a9cf3c3d7fe285eb1470"} Feb 19 10:52:42 crc kubenswrapper[4811]: I0219 10:52:42.591089 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:52:42 crc kubenswrapper[4811]: E0219 10:52:42.591758 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:52:54 crc kubenswrapper[4811]: I0219 10:52:54.583683 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:52:54 crc kubenswrapper[4811]: E0219 10:52:54.585033 4811 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:53:07 crc kubenswrapper[4811]: I0219 10:53:07.583719 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:53:07 crc kubenswrapper[4811]: E0219 10:53:07.584612 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:53:11 crc kubenswrapper[4811]: E0219 10:53:11.250296 4811 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 19 10:53:11 crc kubenswrapper[4811]: E0219 10:53:11.251255 4811 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ts7rf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(4e965f02-705b-4631-879f-27070f0bfa70): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 10:53:11 crc kubenswrapper[4811]: E0219 10:53:11.252569 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="4e965f02-705b-4631-879f-27070f0bfa70" Feb 19 10:53:12 crc kubenswrapper[4811]: E0219 10:53:12.156935 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="4e965f02-705b-4631-879f-27070f0bfa70" Feb 19 10:53:19 crc 
kubenswrapper[4811]: I0219 10:53:19.584808 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:53:19 crc kubenswrapper[4811]: E0219 10:53:19.585717 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 10:53:29 crc kubenswrapper[4811]: I0219 10:53:29.318427 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4e965f02-705b-4631-879f-27070f0bfa70","Type":"ContainerStarted","Data":"c046ea650308c44ae01ed4794560e05bd11486240cfebf3d4faaa5681052d459"} Feb 19 10:53:29 crc kubenswrapper[4811]: I0219 10:53:29.347897 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.68571874 podStartE2EDuration="56.347876306s" podCreationTimestamp="2026-02-19 10:52:33 +0000 UTC" firstStartedPulling="2026-02-19 10:52:35.767335215 +0000 UTC m=+2644.293799901" lastFinishedPulling="2026-02-19 10:53:27.429492781 +0000 UTC m=+2695.955957467" observedRunningTime="2026-02-19 10:53:29.336502295 +0000 UTC m=+2697.862966981" watchObservedRunningTime="2026-02-19 10:53:29.347876306 +0000 UTC m=+2697.874340992" Feb 19 10:53:31 crc kubenswrapper[4811]: I0219 10:53:31.584438 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:53:32 crc kubenswrapper[4811]: I0219 10:53:32.353909 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" 
event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"3cad5c62169d9d1445934bace3f29b0ddeeee2a1ec27e64e9dd4766abcb33b39"} Feb 19 10:55:56 crc kubenswrapper[4811]: I0219 10:55:56.788167 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:55:56 crc kubenswrapper[4811]: I0219 10:55:56.788921 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:56:24 crc kubenswrapper[4811]: I0219 10:56:24.756698 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t4pq8"] Feb 19 10:56:24 crc kubenswrapper[4811]: I0219 10:56:24.760867 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:24 crc kubenswrapper[4811]: I0219 10:56:24.777751 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4pq8"] Feb 19 10:56:24 crc kubenswrapper[4811]: I0219 10:56:24.887361 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-utilities\") pod \"certified-operators-t4pq8\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:24 crc kubenswrapper[4811]: I0219 10:56:24.887586 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-catalog-content\") pod \"certified-operators-t4pq8\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:24 crc kubenswrapper[4811]: I0219 10:56:24.887718 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5jmp\" (UniqueName: \"kubernetes.io/projected/ade206fc-f30e-40af-844f-315370abf933-kube-api-access-w5jmp\") pod \"certified-operators-t4pq8\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:24 crc kubenswrapper[4811]: I0219 10:56:24.990363 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-utilities\") pod \"certified-operators-t4pq8\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:24 crc kubenswrapper[4811]: I0219 10:56:24.990520 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-catalog-content\") pod \"certified-operators-t4pq8\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:24 crc kubenswrapper[4811]: I0219 10:56:24.990562 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5jmp\" (UniqueName: \"kubernetes.io/projected/ade206fc-f30e-40af-844f-315370abf933-kube-api-access-w5jmp\") pod \"certified-operators-t4pq8\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:24 crc kubenswrapper[4811]: I0219 10:56:24.991049 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-utilities\") pod \"certified-operators-t4pq8\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:24 crc kubenswrapper[4811]: I0219 10:56:24.991165 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-catalog-content\") pod \"certified-operators-t4pq8\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:25 crc kubenswrapper[4811]: I0219 10:56:25.020018 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5jmp\" (UniqueName: \"kubernetes.io/projected/ade206fc-f30e-40af-844f-315370abf933-kube-api-access-w5jmp\") pod \"certified-operators-t4pq8\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:25 crc kubenswrapper[4811]: I0219 10:56:25.094641 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:25 crc kubenswrapper[4811]: I0219 10:56:25.694121 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t4pq8"] Feb 19 10:56:26 crc kubenswrapper[4811]: I0219 10:56:26.229673 4811 generic.go:334] "Generic (PLEG): container finished" podID="ade206fc-f30e-40af-844f-315370abf933" containerID="f6b113a9f059414ccad9c8f2da3253cfe1734e4dc7e6818aedace82c6b6d5584" exitCode=0 Feb 19 10:56:26 crc kubenswrapper[4811]: I0219 10:56:26.230185 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4pq8" event={"ID":"ade206fc-f30e-40af-844f-315370abf933","Type":"ContainerDied","Data":"f6b113a9f059414ccad9c8f2da3253cfe1734e4dc7e6818aedace82c6b6d5584"} Feb 19 10:56:26 crc kubenswrapper[4811]: I0219 10:56:26.230226 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4pq8" event={"ID":"ade206fc-f30e-40af-844f-315370abf933","Type":"ContainerStarted","Data":"d7c8668dcfedb50b6a83fe8eea3ced6d59c2e79719069a5f29986f3e364bfdef"} Feb 19 10:56:26 crc kubenswrapper[4811]: I0219 10:56:26.233935 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 10:56:26 crc kubenswrapper[4811]: I0219 10:56:26.787858 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:56:26 crc kubenswrapper[4811]: I0219 10:56:26.787937 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:56:28 crc kubenswrapper[4811]: I0219 10:56:28.260823 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4pq8" event={"ID":"ade206fc-f30e-40af-844f-315370abf933","Type":"ContainerStarted","Data":"e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b"} Feb 19 10:56:29 crc kubenswrapper[4811]: I0219 10:56:29.271201 4811 generic.go:334] "Generic (PLEG): container finished" podID="ade206fc-f30e-40af-844f-315370abf933" containerID="e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b" exitCode=0 Feb 19 10:56:29 crc kubenswrapper[4811]: I0219 10:56:29.271298 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4pq8" event={"ID":"ade206fc-f30e-40af-844f-315370abf933","Type":"ContainerDied","Data":"e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b"} Feb 19 10:56:31 crc kubenswrapper[4811]: I0219 10:56:31.312662 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4pq8" event={"ID":"ade206fc-f30e-40af-844f-315370abf933","Type":"ContainerStarted","Data":"979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac"} Feb 19 10:56:31 crc kubenswrapper[4811]: I0219 10:56:31.344992 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t4pq8" podStartSLOduration=3.340641611 podStartE2EDuration="7.344922627s" podCreationTimestamp="2026-02-19 10:56:24 +0000 UTC" firstStartedPulling="2026-02-19 10:56:26.233624784 +0000 UTC m=+2874.760089470" lastFinishedPulling="2026-02-19 10:56:30.2379058 +0000 UTC m=+2878.764370486" observedRunningTime="2026-02-19 10:56:31.335699256 +0000 UTC m=+2879.862163962" watchObservedRunningTime="2026-02-19 10:56:31.344922627 +0000 UTC m=+2879.871387313" Feb 19 10:56:35 crc kubenswrapper[4811]: I0219 
10:56:35.094859 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:35 crc kubenswrapper[4811]: I0219 10:56:35.096370 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:35 crc kubenswrapper[4811]: I0219 10:56:35.186417 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:35 crc kubenswrapper[4811]: I0219 10:56:35.409526 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:35 crc kubenswrapper[4811]: I0219 10:56:35.473999 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4pq8"] Feb 19 10:56:37 crc kubenswrapper[4811]: I0219 10:56:37.375993 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t4pq8" podUID="ade206fc-f30e-40af-844f-315370abf933" containerName="registry-server" containerID="cri-o://979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac" gracePeriod=2 Feb 19 10:56:37 crc kubenswrapper[4811]: I0219 10:56:37.929775 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.028210 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-utilities\") pod \"ade206fc-f30e-40af-844f-315370abf933\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.028388 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5jmp\" (UniqueName: \"kubernetes.io/projected/ade206fc-f30e-40af-844f-315370abf933-kube-api-access-w5jmp\") pod \"ade206fc-f30e-40af-844f-315370abf933\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.028753 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-catalog-content\") pod \"ade206fc-f30e-40af-844f-315370abf933\" (UID: \"ade206fc-f30e-40af-844f-315370abf933\") " Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.030373 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-utilities" (OuterVolumeSpecName: "utilities") pod "ade206fc-f30e-40af-844f-315370abf933" (UID: "ade206fc-f30e-40af-844f-315370abf933"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.035212 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade206fc-f30e-40af-844f-315370abf933-kube-api-access-w5jmp" (OuterVolumeSpecName: "kube-api-access-w5jmp") pod "ade206fc-f30e-40af-844f-315370abf933" (UID: "ade206fc-f30e-40af-844f-315370abf933"). InnerVolumeSpecName "kube-api-access-w5jmp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.131305 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.131359 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5jmp\" (UniqueName: \"kubernetes.io/projected/ade206fc-f30e-40af-844f-315370abf933-kube-api-access-w5jmp\") on node \"crc\" DevicePath \"\"" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.391377 4811 generic.go:334] "Generic (PLEG): container finished" podID="ade206fc-f30e-40af-844f-315370abf933" containerID="979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac" exitCode=0 Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.391452 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4pq8" event={"ID":"ade206fc-f30e-40af-844f-315370abf933","Type":"ContainerDied","Data":"979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac"} Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.391524 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t4pq8" event={"ID":"ade206fc-f30e-40af-844f-315370abf933","Type":"ContainerDied","Data":"d7c8668dcfedb50b6a83fe8eea3ced6d59c2e79719069a5f29986f3e364bfdef"} Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.391559 4811 scope.go:117] "RemoveContainer" containerID="979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.391708 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t4pq8" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.419904 4811 scope.go:117] "RemoveContainer" containerID="e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.447331 4811 scope.go:117] "RemoveContainer" containerID="f6b113a9f059414ccad9c8f2da3253cfe1734e4dc7e6818aedace82c6b6d5584" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.495255 4811 scope.go:117] "RemoveContainer" containerID="979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac" Feb 19 10:56:38 crc kubenswrapper[4811]: E0219 10:56:38.495941 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac\": container with ID starting with 979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac not found: ID does not exist" containerID="979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.495991 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac"} err="failed to get container status \"979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac\": rpc error: code = NotFound desc = could not find container \"979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac\": container with ID starting with 979cb751c303bbf45aff6df7053c2c09accb63fb0277ab8a4d3a2d3b0b44e6ac not found: ID does not exist" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.496023 4811 scope.go:117] "RemoveContainer" containerID="e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b" Feb 19 10:56:38 crc kubenswrapper[4811]: E0219 10:56:38.496363 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b\": container with ID starting with e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b not found: ID does not exist" containerID="e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.496391 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b"} err="failed to get container status \"e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b\": rpc error: code = NotFound desc = could not find container \"e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b\": container with ID starting with e66c78d1e55a0cbac6a7562da5f01140619339dbd6ec02f904d3109becba546b not found: ID does not exist" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.496409 4811 scope.go:117] "RemoveContainer" containerID="f6b113a9f059414ccad9c8f2da3253cfe1734e4dc7e6818aedace82c6b6d5584" Feb 19 10:56:38 crc kubenswrapper[4811]: E0219 10:56:38.496706 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b113a9f059414ccad9c8f2da3253cfe1734e4dc7e6818aedace82c6b6d5584\": container with ID starting with f6b113a9f059414ccad9c8f2da3253cfe1734e4dc7e6818aedace82c6b6d5584 not found: ID does not exist" containerID="f6b113a9f059414ccad9c8f2da3253cfe1734e4dc7e6818aedace82c6b6d5584" Feb 19 10:56:38 crc kubenswrapper[4811]: I0219 10:56:38.496732 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b113a9f059414ccad9c8f2da3253cfe1734e4dc7e6818aedace82c6b6d5584"} err="failed to get container status \"f6b113a9f059414ccad9c8f2da3253cfe1734e4dc7e6818aedace82c6b6d5584\": rpc error: code = NotFound desc = could not find container 
\"f6b113a9f059414ccad9c8f2da3253cfe1734e4dc7e6818aedace82c6b6d5584\": container with ID starting with f6b113a9f059414ccad9c8f2da3253cfe1734e4dc7e6818aedace82c6b6d5584 not found: ID does not exist" Feb 19 10:56:39 crc kubenswrapper[4811]: I0219 10:56:39.185614 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ade206fc-f30e-40af-844f-315370abf933" (UID: "ade206fc-f30e-40af-844f-315370abf933"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 10:56:39 crc kubenswrapper[4811]: I0219 10:56:39.256941 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade206fc-f30e-40af-844f-315370abf933-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 10:56:39 crc kubenswrapper[4811]: I0219 10:56:39.331422 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t4pq8"] Feb 19 10:56:39 crc kubenswrapper[4811]: I0219 10:56:39.340988 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t4pq8"] Feb 19 10:56:40 crc kubenswrapper[4811]: I0219 10:56:40.597431 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade206fc-f30e-40af-844f-315370abf933" path="/var/lib/kubelet/pods/ade206fc-f30e-40af-844f-315370abf933/volumes" Feb 19 10:56:56 crc kubenswrapper[4811]: I0219 10:56:56.789086 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:56:56 crc kubenswrapper[4811]: I0219 10:56:56.789796 4811 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:56:56 crc kubenswrapper[4811]: I0219 10:56:56.789861 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 10:56:56 crc kubenswrapper[4811]: I0219 10:56:56.790750 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3cad5c62169d9d1445934bace3f29b0ddeeee2a1ec27e64e9dd4766abcb33b39"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 10:56:56 crc kubenswrapper[4811]: I0219 10:56:56.790819 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://3cad5c62169d9d1445934bace3f29b0ddeeee2a1ec27e64e9dd4766abcb33b39" gracePeriod=600 Feb 19 10:56:57 crc kubenswrapper[4811]: I0219 10:56:57.578333 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="3cad5c62169d9d1445934bace3f29b0ddeeee2a1ec27e64e9dd4766abcb33b39" exitCode=0 Feb 19 10:56:57 crc kubenswrapper[4811]: I0219 10:56:57.578416 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"3cad5c62169d9d1445934bace3f29b0ddeeee2a1ec27e64e9dd4766abcb33b39"} Feb 19 10:56:57 crc kubenswrapper[4811]: I0219 10:56:57.578701 4811 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5"} Feb 19 10:56:57 crc kubenswrapper[4811]: I0219 10:56:57.578732 4811 scope.go:117] "RemoveContainer" containerID="d39fec6eee1a4776c6f310ef7d9dd213a092ee3da120aad38ee64de4b6939d2d" Feb 19 10:59:26 crc kubenswrapper[4811]: I0219 10:59:26.788493 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:59:26 crc kubenswrapper[4811]: I0219 10:59:26.789569 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 10:59:56 crc kubenswrapper[4811]: I0219 10:59:56.788108 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 10:59:56 crc kubenswrapper[4811]: I0219 10:59:56.788764 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.160671 4811 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p"] Feb 19 11:00:00 crc kubenswrapper[4811]: E0219 11:00:00.162328 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade206fc-f30e-40af-844f-315370abf933" containerName="registry-server" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.162348 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade206fc-f30e-40af-844f-315370abf933" containerName="registry-server" Feb 19 11:00:00 crc kubenswrapper[4811]: E0219 11:00:00.162378 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade206fc-f30e-40af-844f-315370abf933" containerName="extract-utilities" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.162389 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade206fc-f30e-40af-844f-315370abf933" containerName="extract-utilities" Feb 19 11:00:00 crc kubenswrapper[4811]: E0219 11:00:00.162395 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade206fc-f30e-40af-844f-315370abf933" containerName="extract-content" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.162402 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade206fc-f30e-40af-844f-315370abf933" containerName="extract-content" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.162650 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade206fc-f30e-40af-844f-315370abf933" containerName="registry-server" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.163627 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.166064 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.166719 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.176028 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p"] Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.230944 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bnff\" (UniqueName: \"kubernetes.io/projected/94d66423-926c-4699-90c2-e5267020805f-kube-api-access-5bnff\") pod \"collect-profiles-29524980-78p9p\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.231047 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d66423-926c-4699-90c2-e5267020805f-config-volume\") pod \"collect-profiles-29524980-78p9p\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.231107 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94d66423-926c-4699-90c2-e5267020805f-secret-volume\") pod \"collect-profiles-29524980-78p9p\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.332725 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bnff\" (UniqueName: \"kubernetes.io/projected/94d66423-926c-4699-90c2-e5267020805f-kube-api-access-5bnff\") pod \"collect-profiles-29524980-78p9p\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.332806 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d66423-926c-4699-90c2-e5267020805f-config-volume\") pod \"collect-profiles-29524980-78p9p\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.332853 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94d66423-926c-4699-90c2-e5267020805f-secret-volume\") pod \"collect-profiles-29524980-78p9p\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.334323 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d66423-926c-4699-90c2-e5267020805f-config-volume\") pod \"collect-profiles-29524980-78p9p\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.341750 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/94d66423-926c-4699-90c2-e5267020805f-secret-volume\") pod \"collect-profiles-29524980-78p9p\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.352425 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bnff\" (UniqueName: \"kubernetes.io/projected/94d66423-926c-4699-90c2-e5267020805f-kube-api-access-5bnff\") pod \"collect-profiles-29524980-78p9p\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.508249 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:00 crc kubenswrapper[4811]: I0219 11:00:00.970064 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p"] Feb 19 11:00:01 crc kubenswrapper[4811]: I0219 11:00:01.418394 4811 generic.go:334] "Generic (PLEG): container finished" podID="94d66423-926c-4699-90c2-e5267020805f" containerID="08bd12238ba5532a3e29a700e3ac775cc1359818a030ed5ab5410c2613665627" exitCode=0 Feb 19 11:00:01 crc kubenswrapper[4811]: I0219 11:00:01.418511 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" event={"ID":"94d66423-926c-4699-90c2-e5267020805f","Type":"ContainerDied","Data":"08bd12238ba5532a3e29a700e3ac775cc1359818a030ed5ab5410c2613665627"} Feb 19 11:00:01 crc kubenswrapper[4811]: I0219 11:00:01.418757 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" 
event={"ID":"94d66423-926c-4699-90c2-e5267020805f","Type":"ContainerStarted","Data":"9896d4a77ff60047050a1b97c0d971c83d80fc3a2666c61667628b2a8bc5f42c"} Feb 19 11:00:02 crc kubenswrapper[4811]: I0219 11:00:02.835961 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.006428 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94d66423-926c-4699-90c2-e5267020805f-secret-volume\") pod \"94d66423-926c-4699-90c2-e5267020805f\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.007000 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bnff\" (UniqueName: \"kubernetes.io/projected/94d66423-926c-4699-90c2-e5267020805f-kube-api-access-5bnff\") pod \"94d66423-926c-4699-90c2-e5267020805f\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.007144 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d66423-926c-4699-90c2-e5267020805f-config-volume\") pod \"94d66423-926c-4699-90c2-e5267020805f\" (UID: \"94d66423-926c-4699-90c2-e5267020805f\") " Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.008185 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d66423-926c-4699-90c2-e5267020805f-config-volume" (OuterVolumeSpecName: "config-volume") pod "94d66423-926c-4699-90c2-e5267020805f" (UID: "94d66423-926c-4699-90c2-e5267020805f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.015266 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d66423-926c-4699-90c2-e5267020805f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "94d66423-926c-4699-90c2-e5267020805f" (UID: "94d66423-926c-4699-90c2-e5267020805f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.015708 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d66423-926c-4699-90c2-e5267020805f-kube-api-access-5bnff" (OuterVolumeSpecName: "kube-api-access-5bnff") pod "94d66423-926c-4699-90c2-e5267020805f" (UID: "94d66423-926c-4699-90c2-e5267020805f"). InnerVolumeSpecName "kube-api-access-5bnff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.109219 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d66423-926c-4699-90c2-e5267020805f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.109255 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94d66423-926c-4699-90c2-e5267020805f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.109268 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bnff\" (UniqueName: \"kubernetes.io/projected/94d66423-926c-4699-90c2-e5267020805f-kube-api-access-5bnff\") on node \"crc\" DevicePath \"\"" Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.439652 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" 
event={"ID":"94d66423-926c-4699-90c2-e5267020805f","Type":"ContainerDied","Data":"9896d4a77ff60047050a1b97c0d971c83d80fc3a2666c61667628b2a8bc5f42c"} Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.439719 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9896d4a77ff60047050a1b97c0d971c83d80fc3a2666c61667628b2a8bc5f42c" Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.439730 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524980-78p9p" Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.920233 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd"] Feb 19 11:00:03 crc kubenswrapper[4811]: I0219 11:00:03.932135 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524935-q8dsd"] Feb 19 11:00:04 crc kubenswrapper[4811]: I0219 11:00:04.594635 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a58cbe-c919-45dd-ac89-f6b9c89dcb70" path="/var/lib/kubelet/pods/e3a58cbe-c919-45dd-ac89-f6b9c89dcb70/volumes" Feb 19 11:00:11 crc kubenswrapper[4811]: I0219 11:00:11.328219 4811 scope.go:117] "RemoveContainer" containerID="4e0011a78c646b126bd8aa44f34404338d4c7e372a05a372382ba1c9cb681541" Feb 19 11:00:26 crc kubenswrapper[4811]: I0219 11:00:26.788527 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:00:26 crc kubenswrapper[4811]: I0219 11:00:26.789591 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:00:26 crc kubenswrapper[4811]: I0219 11:00:26.789670 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 11:00:26 crc kubenswrapper[4811]: I0219 11:00:26.790946 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 11:00:26 crc kubenswrapper[4811]: I0219 11:00:26.791072 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" gracePeriod=600 Feb 19 11:00:26 crc kubenswrapper[4811]: E0219 11:00:26.914994 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:00:27 crc kubenswrapper[4811]: I0219 11:00:27.704750 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" exitCode=0 Feb 19 11:00:27 crc kubenswrapper[4811]: I0219 
11:00:27.704840 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5"} Feb 19 11:00:27 crc kubenswrapper[4811]: I0219 11:00:27.705096 4811 scope.go:117] "RemoveContainer" containerID="3cad5c62169d9d1445934bace3f29b0ddeeee2a1ec27e64e9dd4766abcb33b39" Feb 19 11:00:27 crc kubenswrapper[4811]: I0219 11:00:27.705891 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:00:27 crc kubenswrapper[4811]: E0219 11:00:27.706209 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:00:39 crc kubenswrapper[4811]: I0219 11:00:39.583598 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:00:39 crc kubenswrapper[4811]: E0219 11:00:39.585951 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:00:50 crc kubenswrapper[4811]: I0219 11:00:50.583594 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 
11:00:50 crc kubenswrapper[4811]: E0219 11:00:50.584325 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.164339 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524981-j4dnl"] Feb 19 11:01:00 crc kubenswrapper[4811]: E0219 11:01:00.165433 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d66423-926c-4699-90c2-e5267020805f" containerName="collect-profiles" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.165466 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d66423-926c-4699-90c2-e5267020805f" containerName="collect-profiles" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.165653 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d66423-926c-4699-90c2-e5267020805f" containerName="collect-profiles" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.170680 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.191418 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524981-j4dnl"] Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.295656 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f7j6\" (UniqueName: \"kubernetes.io/projected/58f1765b-3836-4257-aa59-d47c7464c1e5-kube-api-access-2f7j6\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.295751 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-combined-ca-bundle\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.295847 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-fernet-keys\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.295902 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-config-data\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.398254 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-2f7j6\" (UniqueName: \"kubernetes.io/projected/58f1765b-3836-4257-aa59-d47c7464c1e5-kube-api-access-2f7j6\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.398330 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-combined-ca-bundle\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.398376 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-fernet-keys\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.398408 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-config-data\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.406912 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-fernet-keys\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.409118 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-combined-ca-bundle\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.412082 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-config-data\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.418272 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f7j6\" (UniqueName: \"kubernetes.io/projected/58f1765b-3836-4257-aa59-d47c7464c1e5-kube-api-access-2f7j6\") pod \"keystone-cron-29524981-j4dnl\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:00 crc kubenswrapper[4811]: I0219 11:01:00.522386 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:01 crc kubenswrapper[4811]: I0219 11:01:01.068032 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524981-j4dnl"] Feb 19 11:01:02 crc kubenswrapper[4811]: I0219 11:01:02.018707 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524981-j4dnl" event={"ID":"58f1765b-3836-4257-aa59-d47c7464c1e5","Type":"ContainerStarted","Data":"94afaf69af5e0cb1f171fa4ec061a03b32809cf24692cf4979e76a45fdad5f64"} Feb 19 11:01:02 crc kubenswrapper[4811]: I0219 11:01:02.019134 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524981-j4dnl" event={"ID":"58f1765b-3836-4257-aa59-d47c7464c1e5","Type":"ContainerStarted","Data":"e43d40216a243d12ea4d25dd8c14e1bc7ded0d03dc3d648156f994eaaa7517cb"} Feb 19 11:01:02 crc kubenswrapper[4811]: I0219 11:01:02.043089 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524981-j4dnl" podStartSLOduration=2.043060459 podStartE2EDuration="2.043060459s" podCreationTimestamp="2026-02-19 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 11:01:02.035282444 +0000 UTC m=+3150.561747220" watchObservedRunningTime="2026-02-19 11:01:02.043060459 +0000 UTC m=+3150.569525145" Feb 19 11:01:03 crc kubenswrapper[4811]: I0219 11:01:03.584031 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:01:03 crc kubenswrapper[4811]: E0219 11:01:03.584762 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:01:04 crc kubenswrapper[4811]: I0219 11:01:04.038836 4811 generic.go:334] "Generic (PLEG): container finished" podID="58f1765b-3836-4257-aa59-d47c7464c1e5" containerID="94afaf69af5e0cb1f171fa4ec061a03b32809cf24692cf4979e76a45fdad5f64" exitCode=0 Feb 19 11:01:04 crc kubenswrapper[4811]: I0219 11:01:04.038896 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524981-j4dnl" event={"ID":"58f1765b-3836-4257-aa59-d47c7464c1e5","Type":"ContainerDied","Data":"94afaf69af5e0cb1f171fa4ec061a03b32809cf24692cf4979e76a45fdad5f64"} Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.561313 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.639027 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-config-data\") pod \"58f1765b-3836-4257-aa59-d47c7464c1e5\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.639629 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-combined-ca-bundle\") pod \"58f1765b-3836-4257-aa59-d47c7464c1e5\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.639734 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f7j6\" (UniqueName: \"kubernetes.io/projected/58f1765b-3836-4257-aa59-d47c7464c1e5-kube-api-access-2f7j6\") pod \"58f1765b-3836-4257-aa59-d47c7464c1e5\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 
11:01:05.639960 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-fernet-keys\") pod \"58f1765b-3836-4257-aa59-d47c7464c1e5\" (UID: \"58f1765b-3836-4257-aa59-d47c7464c1e5\") " Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.645925 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f1765b-3836-4257-aa59-d47c7464c1e5-kube-api-access-2f7j6" (OuterVolumeSpecName: "kube-api-access-2f7j6") pod "58f1765b-3836-4257-aa59-d47c7464c1e5" (UID: "58f1765b-3836-4257-aa59-d47c7464c1e5"). InnerVolumeSpecName "kube-api-access-2f7j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.646873 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "58f1765b-3836-4257-aa59-d47c7464c1e5" (UID: "58f1765b-3836-4257-aa59-d47c7464c1e5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.674716 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58f1765b-3836-4257-aa59-d47c7464c1e5" (UID: "58f1765b-3836-4257-aa59-d47c7464c1e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.701475 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-config-data" (OuterVolumeSpecName: "config-data") pod "58f1765b-3836-4257-aa59-d47c7464c1e5" (UID: "58f1765b-3836-4257-aa59-d47c7464c1e5"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.743758 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.743803 4811 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.743823 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f7j6\" (UniqueName: \"kubernetes.io/projected/58f1765b-3836-4257-aa59-d47c7464c1e5-kube-api-access-2f7j6\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:05 crc kubenswrapper[4811]: I0219 11:01:05.743834 4811 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/58f1765b-3836-4257-aa59-d47c7464c1e5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:06 crc kubenswrapper[4811]: I0219 11:01:06.064273 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524981-j4dnl" event={"ID":"58f1765b-3836-4257-aa59-d47c7464c1e5","Type":"ContainerDied","Data":"e43d40216a243d12ea4d25dd8c14e1bc7ded0d03dc3d648156f994eaaa7517cb"} Feb 19 11:01:06 crc kubenswrapper[4811]: I0219 11:01:06.064331 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43d40216a243d12ea4d25dd8c14e1bc7ded0d03dc3d648156f994eaaa7517cb" Feb 19 11:01:06 crc kubenswrapper[4811]: I0219 11:01:06.064357 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524981-j4dnl" Feb 19 11:01:17 crc kubenswrapper[4811]: I0219 11:01:17.583945 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:01:17 crc kubenswrapper[4811]: E0219 11:01:17.584925 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:01:21 crc kubenswrapper[4811]: I0219 11:01:21.938669 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vqq57"] Feb 19 11:01:21 crc kubenswrapper[4811]: E0219 11:01:21.940263 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f1765b-3836-4257-aa59-d47c7464c1e5" containerName="keystone-cron" Feb 19 11:01:21 crc kubenswrapper[4811]: I0219 11:01:21.940294 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f1765b-3836-4257-aa59-d47c7464c1e5" containerName="keystone-cron" Feb 19 11:01:21 crc kubenswrapper[4811]: I0219 11:01:21.940729 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="58f1765b-3836-4257-aa59-d47c7464c1e5" containerName="keystone-cron" Feb 19 11:01:21 crc kubenswrapper[4811]: I0219 11:01:21.960485 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:21 crc kubenswrapper[4811]: I0219 11:01:21.966012 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vqq57"] Feb 19 11:01:22 crc kubenswrapper[4811]: I0219 11:01:22.040878 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89wjj\" (UniqueName: \"kubernetes.io/projected/8d5401c2-6033-404b-a16e-51a9e689528d-kube-api-access-89wjj\") pod \"community-operators-vqq57\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:22 crc kubenswrapper[4811]: I0219 11:01:22.041017 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-utilities\") pod \"community-operators-vqq57\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:22 crc kubenswrapper[4811]: I0219 11:01:22.041090 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-catalog-content\") pod \"community-operators-vqq57\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:22 crc kubenswrapper[4811]: I0219 11:01:22.143248 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89wjj\" (UniqueName: \"kubernetes.io/projected/8d5401c2-6033-404b-a16e-51a9e689528d-kube-api-access-89wjj\") pod \"community-operators-vqq57\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:22 crc kubenswrapper[4811]: I0219 11:01:22.143396 4811 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-utilities\") pod \"community-operators-vqq57\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:22 crc kubenswrapper[4811]: I0219 11:01:22.143495 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-catalog-content\") pod \"community-operators-vqq57\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:22 crc kubenswrapper[4811]: I0219 11:01:22.143952 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-utilities\") pod \"community-operators-vqq57\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:22 crc kubenswrapper[4811]: I0219 11:01:22.144074 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-catalog-content\") pod \"community-operators-vqq57\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:22 crc kubenswrapper[4811]: I0219 11:01:22.169382 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89wjj\" (UniqueName: \"kubernetes.io/projected/8d5401c2-6033-404b-a16e-51a9e689528d-kube-api-access-89wjj\") pod \"community-operators-vqq57\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:22 crc kubenswrapper[4811]: I0219 11:01:22.293274 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:22 crc kubenswrapper[4811]: I0219 11:01:22.890121 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vqq57"] Feb 19 11:01:23 crc kubenswrapper[4811]: I0219 11:01:23.228672 4811 generic.go:334] "Generic (PLEG): container finished" podID="8d5401c2-6033-404b-a16e-51a9e689528d" containerID="c77a2465c33785233277f01e2123565aa5ccdbd51246df7affd1d7226af2725e" exitCode=0 Feb 19 11:01:23 crc kubenswrapper[4811]: I0219 11:01:23.228750 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqq57" event={"ID":"8d5401c2-6033-404b-a16e-51a9e689528d","Type":"ContainerDied","Data":"c77a2465c33785233277f01e2123565aa5ccdbd51246df7affd1d7226af2725e"} Feb 19 11:01:23 crc kubenswrapper[4811]: I0219 11:01:23.228791 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqq57" event={"ID":"8d5401c2-6033-404b-a16e-51a9e689528d","Type":"ContainerStarted","Data":"5d74e0ccda9f686f030f1450b36638e0c4e0797e9e92abbabfe64da94a3dc4db"} Feb 19 11:01:24 crc kubenswrapper[4811]: I0219 11:01:24.240346 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqq57" event={"ID":"8d5401c2-6033-404b-a16e-51a9e689528d","Type":"ContainerStarted","Data":"a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95"} Feb 19 11:01:25 crc kubenswrapper[4811]: I0219 11:01:25.256938 4811 generic.go:334] "Generic (PLEG): container finished" podID="8d5401c2-6033-404b-a16e-51a9e689528d" containerID="a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95" exitCode=0 Feb 19 11:01:25 crc kubenswrapper[4811]: I0219 11:01:25.257053 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqq57" 
event={"ID":"8d5401c2-6033-404b-a16e-51a9e689528d","Type":"ContainerDied","Data":"a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95"} Feb 19 11:01:26 crc kubenswrapper[4811]: I0219 11:01:26.274285 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqq57" event={"ID":"8d5401c2-6033-404b-a16e-51a9e689528d","Type":"ContainerStarted","Data":"56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d"} Feb 19 11:01:26 crc kubenswrapper[4811]: I0219 11:01:26.313269 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vqq57" podStartSLOduration=2.860594365 podStartE2EDuration="5.313241501s" podCreationTimestamp="2026-02-19 11:01:21 +0000 UTC" firstStartedPulling="2026-02-19 11:01:23.231626374 +0000 UTC m=+3171.758091060" lastFinishedPulling="2026-02-19 11:01:25.68427351 +0000 UTC m=+3174.210738196" observedRunningTime="2026-02-19 11:01:26.307837635 +0000 UTC m=+3174.834302341" watchObservedRunningTime="2026-02-19 11:01:26.313241501 +0000 UTC m=+3174.839706177" Feb 19 11:01:28 crc kubenswrapper[4811]: I0219 11:01:28.583368 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:01:28 crc kubenswrapper[4811]: E0219 11:01:28.584011 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:01:32 crc kubenswrapper[4811]: I0219 11:01:32.295119 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:32 crc 
kubenswrapper[4811]: I0219 11:01:32.295776 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:32 crc kubenswrapper[4811]: I0219 11:01:32.352146 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:32 crc kubenswrapper[4811]: I0219 11:01:32.411339 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:32 crc kubenswrapper[4811]: I0219 11:01:32.603207 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vqq57"] Feb 19 11:01:34 crc kubenswrapper[4811]: I0219 11:01:34.350221 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vqq57" podUID="8d5401c2-6033-404b-a16e-51a9e689528d" containerName="registry-server" containerID="cri-o://56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d" gracePeriod=2 Feb 19 11:01:34 crc kubenswrapper[4811]: I0219 11:01:34.905880 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.034022 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-catalog-content\") pod \"8d5401c2-6033-404b-a16e-51a9e689528d\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.034374 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89wjj\" (UniqueName: \"kubernetes.io/projected/8d5401c2-6033-404b-a16e-51a9e689528d-kube-api-access-89wjj\") pod \"8d5401c2-6033-404b-a16e-51a9e689528d\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.034425 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-utilities\") pod \"8d5401c2-6033-404b-a16e-51a9e689528d\" (UID: \"8d5401c2-6033-404b-a16e-51a9e689528d\") " Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.035472 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-utilities" (OuterVolumeSpecName: "utilities") pod "8d5401c2-6033-404b-a16e-51a9e689528d" (UID: "8d5401c2-6033-404b-a16e-51a9e689528d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.040570 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5401c2-6033-404b-a16e-51a9e689528d-kube-api-access-89wjj" (OuterVolumeSpecName: "kube-api-access-89wjj") pod "8d5401c2-6033-404b-a16e-51a9e689528d" (UID: "8d5401c2-6033-404b-a16e-51a9e689528d"). InnerVolumeSpecName "kube-api-access-89wjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.091895 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d5401c2-6033-404b-a16e-51a9e689528d" (UID: "8d5401c2-6033-404b-a16e-51a9e689528d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.137738 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89wjj\" (UniqueName: \"kubernetes.io/projected/8d5401c2-6033-404b-a16e-51a9e689528d-kube-api-access-89wjj\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.137798 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.137812 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5401c2-6033-404b-a16e-51a9e689528d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.361380 4811 generic.go:334] "Generic (PLEG): container finished" podID="8d5401c2-6033-404b-a16e-51a9e689528d" containerID="56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d" exitCode=0 Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.361503 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vqq57" event={"ID":"8d5401c2-6033-404b-a16e-51a9e689528d","Type":"ContainerDied","Data":"56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d"} Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.361807 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-vqq57" event={"ID":"8d5401c2-6033-404b-a16e-51a9e689528d","Type":"ContainerDied","Data":"5d74e0ccda9f686f030f1450b36638e0c4e0797e9e92abbabfe64da94a3dc4db"} Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.361842 4811 scope.go:117] "RemoveContainer" containerID="56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.361553 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vqq57" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.385066 4811 scope.go:117] "RemoveContainer" containerID="a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.397175 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vqq57"] Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.406823 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vqq57"] Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.430840 4811 scope.go:117] "RemoveContainer" containerID="c77a2465c33785233277f01e2123565aa5ccdbd51246df7affd1d7226af2725e" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.470467 4811 scope.go:117] "RemoveContainer" containerID="56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d" Feb 19 11:01:35 crc kubenswrapper[4811]: E0219 11:01:35.471324 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d\": container with ID starting with 56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d not found: ID does not exist" containerID="56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 
11:01:35.471393 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d"} err="failed to get container status \"56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d\": rpc error: code = NotFound desc = could not find container \"56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d\": container with ID starting with 56a9d1a9bf8d3f63709148e7d32ef2b1eea3edbe35c2329bd5392929f30a1a8d not found: ID does not exist" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.471435 4811 scope.go:117] "RemoveContainer" containerID="a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95" Feb 19 11:01:35 crc kubenswrapper[4811]: E0219 11:01:35.471973 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95\": container with ID starting with a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95 not found: ID does not exist" containerID="a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.472044 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95"} err="failed to get container status \"a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95\": rpc error: code = NotFound desc = could not find container \"a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95\": container with ID starting with a0123d5700d7dafddb52b64fd772cf11cd8386902412ed68d3b55d68e6726c95 not found: ID does not exist" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.472084 4811 scope.go:117] "RemoveContainer" containerID="c77a2465c33785233277f01e2123565aa5ccdbd51246df7affd1d7226af2725e" Feb 19 11:01:35 crc 
kubenswrapper[4811]: E0219 11:01:35.472782 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c77a2465c33785233277f01e2123565aa5ccdbd51246df7affd1d7226af2725e\": container with ID starting with c77a2465c33785233277f01e2123565aa5ccdbd51246df7affd1d7226af2725e not found: ID does not exist" containerID="c77a2465c33785233277f01e2123565aa5ccdbd51246df7affd1d7226af2725e" Feb 19 11:01:35 crc kubenswrapper[4811]: I0219 11:01:35.472819 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c77a2465c33785233277f01e2123565aa5ccdbd51246df7affd1d7226af2725e"} err="failed to get container status \"c77a2465c33785233277f01e2123565aa5ccdbd51246df7affd1d7226af2725e\": rpc error: code = NotFound desc = could not find container \"c77a2465c33785233277f01e2123565aa5ccdbd51246df7affd1d7226af2725e\": container with ID starting with c77a2465c33785233277f01e2123565aa5ccdbd51246df7affd1d7226af2725e not found: ID does not exist" Feb 19 11:01:36 crc kubenswrapper[4811]: I0219 11:01:36.595894 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d5401c2-6033-404b-a16e-51a9e689528d" path="/var/lib/kubelet/pods/8d5401c2-6033-404b-a16e-51a9e689528d/volumes" Feb 19 11:01:42 crc kubenswrapper[4811]: I0219 11:01:42.601102 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:01:42 crc kubenswrapper[4811]: E0219 11:01:42.603461 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:01:57 crc 
kubenswrapper[4811]: I0219 11:01:57.584354 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:01:57 crc kubenswrapper[4811]: E0219 11:01:57.585188 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.625140 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wbd46"] Feb 19 11:01:57 crc kubenswrapper[4811]: E0219 11:01:57.625632 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5401c2-6033-404b-a16e-51a9e689528d" containerName="extract-utilities" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.625649 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5401c2-6033-404b-a16e-51a9e689528d" containerName="extract-utilities" Feb 19 11:01:57 crc kubenswrapper[4811]: E0219 11:01:57.625670 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5401c2-6033-404b-a16e-51a9e689528d" containerName="registry-server" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.625677 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5401c2-6033-404b-a16e-51a9e689528d" containerName="registry-server" Feb 19 11:01:57 crc kubenswrapper[4811]: E0219 11:01:57.625715 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5401c2-6033-404b-a16e-51a9e689528d" containerName="extract-content" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.625722 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5401c2-6033-404b-a16e-51a9e689528d" 
containerName="extract-content" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.625959 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d5401c2-6033-404b-a16e-51a9e689528d" containerName="registry-server" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.627567 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.652086 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbd46"] Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.769163 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-utilities\") pod \"redhat-marketplace-wbd46\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.769639 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxd2t\" (UniqueName: \"kubernetes.io/projected/3de5ea69-4037-4eaa-97cc-f54490a984b2-kube-api-access-sxd2t\") pod \"redhat-marketplace-wbd46\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.769804 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-catalog-content\") pod \"redhat-marketplace-wbd46\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.871581 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sxd2t\" (UniqueName: \"kubernetes.io/projected/3de5ea69-4037-4eaa-97cc-f54490a984b2-kube-api-access-sxd2t\") pod \"redhat-marketplace-wbd46\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.872200 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-catalog-content\") pod \"redhat-marketplace-wbd46\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.872398 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-utilities\") pod \"redhat-marketplace-wbd46\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.872968 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-utilities\") pod \"redhat-marketplace-wbd46\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.873267 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-catalog-content\") pod \"redhat-marketplace-wbd46\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.902097 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxd2t\" (UniqueName: 
\"kubernetes.io/projected/3de5ea69-4037-4eaa-97cc-f54490a984b2-kube-api-access-sxd2t\") pod \"redhat-marketplace-wbd46\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:01:57 crc kubenswrapper[4811]: I0219 11:01:57.954210 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:01:58 crc kubenswrapper[4811]: I0219 11:01:58.482977 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbd46"] Feb 19 11:01:58 crc kubenswrapper[4811]: I0219 11:01:58.617865 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbd46" event={"ID":"3de5ea69-4037-4eaa-97cc-f54490a984b2","Type":"ContainerStarted","Data":"91eb191b09318d0ee1d3589c897978c36411a092a753ea841f110660cdb3045a"} Feb 19 11:01:59 crc kubenswrapper[4811]: I0219 11:01:59.630232 4811 generic.go:334] "Generic (PLEG): container finished" podID="3de5ea69-4037-4eaa-97cc-f54490a984b2" containerID="92c6d62df68c7b698436050054fe852e09e217e0e865d7dc2cfc64e541aee022" exitCode=0 Feb 19 11:01:59 crc kubenswrapper[4811]: I0219 11:01:59.630349 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbd46" event={"ID":"3de5ea69-4037-4eaa-97cc-f54490a984b2","Type":"ContainerDied","Data":"92c6d62df68c7b698436050054fe852e09e217e0e865d7dc2cfc64e541aee022"} Feb 19 11:01:59 crc kubenswrapper[4811]: I0219 11:01:59.635761 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 11:02:00 crc kubenswrapper[4811]: I0219 11:02:00.648033 4811 generic.go:334] "Generic (PLEG): container finished" podID="3de5ea69-4037-4eaa-97cc-f54490a984b2" containerID="0c095201db1cf7e0a034c1a0cfc6be21c02ca631ee1703505cccc5caeaa33462" exitCode=0 Feb 19 11:02:00 crc kubenswrapper[4811]: I0219 11:02:00.648320 4811 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbd46" event={"ID":"3de5ea69-4037-4eaa-97cc-f54490a984b2","Type":"ContainerDied","Data":"0c095201db1cf7e0a034c1a0cfc6be21c02ca631ee1703505cccc5caeaa33462"} Feb 19 11:02:01 crc kubenswrapper[4811]: I0219 11:02:01.660403 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbd46" event={"ID":"3de5ea69-4037-4eaa-97cc-f54490a984b2","Type":"ContainerStarted","Data":"a2fd97b8aa1aadcc2ce9f33b27250b82e49520174311c67acfe7e53af6b018ab"} Feb 19 11:02:01 crc kubenswrapper[4811]: I0219 11:02:01.690906 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wbd46" podStartSLOduration=3.185556286 podStartE2EDuration="4.690873023s" podCreationTimestamp="2026-02-19 11:01:57 +0000 UTC" firstStartedPulling="2026-02-19 11:01:59.635380602 +0000 UTC m=+3208.161845288" lastFinishedPulling="2026-02-19 11:02:01.140697339 +0000 UTC m=+3209.667162025" observedRunningTime="2026-02-19 11:02:01.682529283 +0000 UTC m=+3210.208993979" watchObservedRunningTime="2026-02-19 11:02:01.690873023 +0000 UTC m=+3210.217337709" Feb 19 11:02:07 crc kubenswrapper[4811]: I0219 11:02:07.954515 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:02:07 crc kubenswrapper[4811]: I0219 11:02:07.955119 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:02:08 crc kubenswrapper[4811]: I0219 11:02:08.001493 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:02:08 crc kubenswrapper[4811]: I0219 11:02:08.775647 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:02:08 crc kubenswrapper[4811]: I0219 
11:02:08.831165 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbd46"] Feb 19 11:02:10 crc kubenswrapper[4811]: I0219 11:02:10.744052 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wbd46" podUID="3de5ea69-4037-4eaa-97cc-f54490a984b2" containerName="registry-server" containerID="cri-o://a2fd97b8aa1aadcc2ce9f33b27250b82e49520174311c67acfe7e53af6b018ab" gracePeriod=2 Feb 19 11:02:11 crc kubenswrapper[4811]: I0219 11:02:11.798742 4811 generic.go:334] "Generic (PLEG): container finished" podID="3de5ea69-4037-4eaa-97cc-f54490a984b2" containerID="a2fd97b8aa1aadcc2ce9f33b27250b82e49520174311c67acfe7e53af6b018ab" exitCode=0 Feb 19 11:02:11 crc kubenswrapper[4811]: I0219 11:02:11.798832 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbd46" event={"ID":"3de5ea69-4037-4eaa-97cc-f54490a984b2","Type":"ContainerDied","Data":"a2fd97b8aa1aadcc2ce9f33b27250b82e49520174311c67acfe7e53af6b018ab"} Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.006294 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.107680 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-utilities\") pod \"3de5ea69-4037-4eaa-97cc-f54490a984b2\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.107994 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-catalog-content\") pod \"3de5ea69-4037-4eaa-97cc-f54490a984b2\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.108155 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxd2t\" (UniqueName: \"kubernetes.io/projected/3de5ea69-4037-4eaa-97cc-f54490a984b2-kube-api-access-sxd2t\") pod \"3de5ea69-4037-4eaa-97cc-f54490a984b2\" (UID: \"3de5ea69-4037-4eaa-97cc-f54490a984b2\") " Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.109367 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-utilities" (OuterVolumeSpecName: "utilities") pod "3de5ea69-4037-4eaa-97cc-f54490a984b2" (UID: "3de5ea69-4037-4eaa-97cc-f54490a984b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.114530 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de5ea69-4037-4eaa-97cc-f54490a984b2-kube-api-access-sxd2t" (OuterVolumeSpecName: "kube-api-access-sxd2t") pod "3de5ea69-4037-4eaa-97cc-f54490a984b2" (UID: "3de5ea69-4037-4eaa-97cc-f54490a984b2"). InnerVolumeSpecName "kube-api-access-sxd2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.133914 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3de5ea69-4037-4eaa-97cc-f54490a984b2" (UID: "3de5ea69-4037-4eaa-97cc-f54490a984b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.212153 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.212215 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxd2t\" (UniqueName: \"kubernetes.io/projected/3de5ea69-4037-4eaa-97cc-f54490a984b2-kube-api-access-sxd2t\") on node \"crc\" DevicePath \"\"" Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.212235 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de5ea69-4037-4eaa-97cc-f54490a984b2-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.591586 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:02:12 crc kubenswrapper[4811]: E0219 11:02:12.591902 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:02:12 
crc kubenswrapper[4811]: I0219 11:02:12.811669 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbd46" event={"ID":"3de5ea69-4037-4eaa-97cc-f54490a984b2","Type":"ContainerDied","Data":"91eb191b09318d0ee1d3589c897978c36411a092a753ea841f110660cdb3045a"} Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.811745 4811 scope.go:117] "RemoveContainer" containerID="a2fd97b8aa1aadcc2ce9f33b27250b82e49520174311c67acfe7e53af6b018ab" Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.811921 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbd46" Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.843176 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbd46"] Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.847888 4811 scope.go:117] "RemoveContainer" containerID="0c095201db1cf7e0a034c1a0cfc6be21c02ca631ee1703505cccc5caeaa33462" Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.854225 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbd46"] Feb 19 11:02:12 crc kubenswrapper[4811]: I0219 11:02:12.893785 4811 scope.go:117] "RemoveContainer" containerID="92c6d62df68c7b698436050054fe852e09e217e0e865d7dc2cfc64e541aee022" Feb 19 11:02:14 crc kubenswrapper[4811]: I0219 11:02:14.597538 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de5ea69-4037-4eaa-97cc-f54490a984b2" path="/var/lib/kubelet/pods/3de5ea69-4037-4eaa-97cc-f54490a984b2/volumes" Feb 19 11:02:27 crc kubenswrapper[4811]: I0219 11:02:27.583685 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:02:27 crc kubenswrapper[4811]: E0219 11:02:27.584930 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:02:41 crc kubenswrapper[4811]: I0219 11:02:41.584305 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:02:41 crc kubenswrapper[4811]: E0219 11:02:41.585494 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:02:55 crc kubenswrapper[4811]: I0219 11:02:55.583058 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:02:55 crc kubenswrapper[4811]: E0219 11:02:55.583922 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:03:09 crc kubenswrapper[4811]: I0219 11:03:09.583394 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:03:09 crc kubenswrapper[4811]: E0219 11:03:09.585862 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:03:24 crc kubenswrapper[4811]: I0219 11:03:24.583991 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:03:24 crc kubenswrapper[4811]: E0219 11:03:24.584651 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:03:39 crc kubenswrapper[4811]: I0219 11:03:39.584299 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:03:39 crc kubenswrapper[4811]: E0219 11:03:39.585319 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:03:50 crc kubenswrapper[4811]: I0219 11:03:50.584076 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:03:50 crc kubenswrapper[4811]: E0219 11:03:50.584917 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:04:02 crc kubenswrapper[4811]: I0219 11:04:02.591582 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:04:02 crc kubenswrapper[4811]: E0219 11:04:02.592264 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:04:16 crc kubenswrapper[4811]: I0219 11:04:16.583803 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:04:16 crc kubenswrapper[4811]: E0219 11:04:16.584607 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:04:28 crc kubenswrapper[4811]: I0219 11:04:28.583924 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:04:28 crc kubenswrapper[4811]: E0219 11:04:28.584706 4811 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:04:43 crc kubenswrapper[4811]: I0219 11:04:43.583634 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:04:43 crc kubenswrapper[4811]: E0219 11:04:43.584595 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:04:57 crc kubenswrapper[4811]: I0219 11:04:57.585886 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:04:57 crc kubenswrapper[4811]: E0219 11:04:57.586650 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:05:08 crc kubenswrapper[4811]: I0219 11:05:08.583874 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:05:08 crc kubenswrapper[4811]: E0219 11:05:08.585343 4811 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:05:20 crc kubenswrapper[4811]: I0219 11:05:20.666876 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4e965f02-705b-4631-879f-27070f0bfa70","Type":"ContainerDied","Data":"c046ea650308c44ae01ed4794560e05bd11486240cfebf3d4faaa5681052d459"} Feb 19 11:05:20 crc kubenswrapper[4811]: I0219 11:05:20.666768 4811 generic.go:334] "Generic (PLEG): container finished" podID="4e965f02-705b-4631-879f-27070f0bfa70" containerID="c046ea650308c44ae01ed4794560e05bd11486240cfebf3d4faaa5681052d459" exitCode=0 Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.112566 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.257036 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts7rf\" (UniqueName: \"kubernetes.io/projected/4e965f02-705b-4631-879f-27070f0bfa70-kube-api-access-ts7rf\") pod \"4e965f02-705b-4631-879f-27070f0bfa70\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.257115 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config-secret\") pod \"4e965f02-705b-4631-879f-27070f0bfa70\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.257227 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ssh-key\") pod \"4e965f02-705b-4631-879f-27070f0bfa70\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.257263 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ca-certs\") pod \"4e965f02-705b-4631-879f-27070f0bfa70\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.257331 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"4e965f02-705b-4631-879f-27070f0bfa70\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.257396 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-config-data\") pod \"4e965f02-705b-4631-879f-27070f0bfa70\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.257442 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-workdir\") pod \"4e965f02-705b-4631-879f-27070f0bfa70\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.257517 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config\") pod \"4e965f02-705b-4631-879f-27070f0bfa70\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.257555 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-temporary\") pod \"4e965f02-705b-4631-879f-27070f0bfa70\" (UID: \"4e965f02-705b-4631-879f-27070f0bfa70\") " Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.258275 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-config-data" (OuterVolumeSpecName: "config-data") pod "4e965f02-705b-4631-879f-27070f0bfa70" (UID: "4e965f02-705b-4631-879f-27070f0bfa70"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.258737 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "4e965f02-705b-4631-879f-27070f0bfa70" (UID: "4e965f02-705b-4631-879f-27070f0bfa70"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.258918 4811 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.258940 4811 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.266796 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "4e965f02-705b-4631-879f-27070f0bfa70" (UID: "4e965f02-705b-4631-879f-27070f0bfa70"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.266842 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e965f02-705b-4631-879f-27070f0bfa70-kube-api-access-ts7rf" (OuterVolumeSpecName: "kube-api-access-ts7rf") pod "4e965f02-705b-4631-879f-27070f0bfa70" (UID: "4e965f02-705b-4631-879f-27070f0bfa70"). InnerVolumeSpecName "kube-api-access-ts7rf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.267832 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "4e965f02-705b-4631-879f-27070f0bfa70" (UID: "4e965f02-705b-4631-879f-27070f0bfa70"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.289760 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "4e965f02-705b-4631-879f-27070f0bfa70" (UID: "4e965f02-705b-4631-879f-27070f0bfa70"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.290503 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4e965f02-705b-4631-879f-27070f0bfa70" (UID: "4e965f02-705b-4631-879f-27070f0bfa70"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.292918 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4e965f02-705b-4631-879f-27070f0bfa70" (UID: "4e965f02-705b-4631-879f-27070f0bfa70"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.321600 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4e965f02-705b-4631-879f-27070f0bfa70" (UID: "4e965f02-705b-4631-879f-27070f0bfa70"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.361382 4811 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4e965f02-705b-4631-879f-27070f0bfa70-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.361429 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.361467 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts7rf\" (UniqueName: \"kubernetes.io/projected/4e965f02-705b-4631-879f-27070f0bfa70-kube-api-access-ts7rf\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.361479 4811 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.361491 4811 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.361504 4811 reconciler_common.go:293] "Volume 
detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4e965f02-705b-4631-879f-27070f0bfa70-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.361551 4811 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.386862 4811 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.463606 4811 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.589646 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:05:22 crc kubenswrapper[4811]: E0219 11:05:22.591276 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.689130 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4e965f02-705b-4631-879f-27070f0bfa70","Type":"ContainerDied","Data":"1c93c09b80b57980d7f53321e443b85249f6d721cc01a9cf3c3d7fe285eb1470"} Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.689179 4811 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1c93c09b80b57980d7f53321e443b85249f6d721cc01a9cf3c3d7fe285eb1470" Feb 19 11:05:22 crc kubenswrapper[4811]: I0219 11:05:22.689246 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.815284 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 11:05:27 crc kubenswrapper[4811]: E0219 11:05:27.816423 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de5ea69-4037-4eaa-97cc-f54490a984b2" containerName="extract-utilities" Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.816439 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de5ea69-4037-4eaa-97cc-f54490a984b2" containerName="extract-utilities" Feb 19 11:05:27 crc kubenswrapper[4811]: E0219 11:05:27.816504 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de5ea69-4037-4eaa-97cc-f54490a984b2" containerName="extract-content" Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.816511 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de5ea69-4037-4eaa-97cc-f54490a984b2" containerName="extract-content" Feb 19 11:05:27 crc kubenswrapper[4811]: E0219 11:05:27.816528 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de5ea69-4037-4eaa-97cc-f54490a984b2" containerName="registry-server" Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.816534 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de5ea69-4037-4eaa-97cc-f54490a984b2" containerName="registry-server" Feb 19 11:05:27 crc kubenswrapper[4811]: E0219 11:05:27.816546 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e965f02-705b-4631-879f-27070f0bfa70" containerName="tempest-tests-tempest-tests-runner" Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.816553 4811 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4e965f02-705b-4631-879f-27070f0bfa70" containerName="tempest-tests-tempest-tests-runner" Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.816763 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de5ea69-4037-4eaa-97cc-f54490a984b2" containerName="registry-server" Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.816772 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e965f02-705b-4631-879f-27070f0bfa70" containerName="tempest-tests-tempest-tests-runner" Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.817612 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.820862 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kvmmx" Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.826126 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.986386 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ttgk\" (UniqueName: \"kubernetes.io/projected/a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b-kube-api-access-7ttgk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 11:05:27 crc kubenswrapper[4811]: I0219 11:05:27.986503 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 
11:05:28 crc kubenswrapper[4811]: I0219 11:05:28.088929 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ttgk\" (UniqueName: \"kubernetes.io/projected/a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b-kube-api-access-7ttgk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 11:05:28 crc kubenswrapper[4811]: I0219 11:05:28.089010 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 11:05:28 crc kubenswrapper[4811]: I0219 11:05:28.089600 4811 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 11:05:28 crc kubenswrapper[4811]: I0219 11:05:28.120243 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ttgk\" (UniqueName: \"kubernetes.io/projected/a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b-kube-api-access-7ttgk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 11:05:28 crc kubenswrapper[4811]: I0219 11:05:28.141089 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 11:05:28 crc kubenswrapper[4811]: I0219 11:05:28.147734 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 11:05:28 crc kubenswrapper[4811]: I0219 11:05:28.668203 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 11:05:28 crc kubenswrapper[4811]: I0219 11:05:28.757968 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b","Type":"ContainerStarted","Data":"c5f64eadbd4a2bdd0df6bdb75d389f20f9721baededf75a26c68eed0215f9fa4"} Feb 19 11:05:30 crc kubenswrapper[4811]: I0219 11:05:30.782705 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b","Type":"ContainerStarted","Data":"aa87dcee1b32e70179fd973d452bcd0a43a728a289d0199c05969e3ca4c6fe4d"} Feb 19 11:05:36 crc kubenswrapper[4811]: I0219 11:05:36.583186 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5" Feb 19 11:05:37 crc kubenswrapper[4811]: I0219 11:05:37.851242 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"d5f586fcacc6004c20dfa70ee7a04ad9b63b7e2534b6e867fb032ccc9df2a399"} Feb 19 11:05:37 crc kubenswrapper[4811]: I0219 11:05:37.871084 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podStartSLOduration=9.919621583 podStartE2EDuration="10.871055503s" podCreationTimestamp="2026-02-19 11:05:27 +0000 UTC" firstStartedPulling="2026-02-19 11:05:28.684815585 +0000 UTC m=+3417.211280271" lastFinishedPulling="2026-02-19 11:05:29.636249505 +0000 UTC m=+3418.162714191" observedRunningTime="2026-02-19 11:05:30.80232724 +0000 UTC m=+3419.328791926" watchObservedRunningTime="2026-02-19 11:05:37.871055503 +0000 UTC m=+3426.397520189" Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.228276 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jw49p/must-gather-w84fj"] Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.230589 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jw49p/must-gather-w84fj" Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.239705 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jw49p"/"kube-root-ca.crt" Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.239915 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jw49p"/"openshift-service-ca.crt" Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.259762 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jw49p/must-gather-w84fj"] Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.317824 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/723ec19b-8f91-4289-a85c-c868ee12f4fb-must-gather-output\") pod \"must-gather-w84fj\" (UID: \"723ec19b-8f91-4289-a85c-c868ee12f4fb\") " pod="openshift-must-gather-jw49p/must-gather-w84fj" Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.317960 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cxw5\" (UniqueName: 
\"kubernetes.io/projected/723ec19b-8f91-4289-a85c-c868ee12f4fb-kube-api-access-9cxw5\") pod \"must-gather-w84fj\" (UID: \"723ec19b-8f91-4289-a85c-c868ee12f4fb\") " pod="openshift-must-gather-jw49p/must-gather-w84fj" Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.419423 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cxw5\" (UniqueName: \"kubernetes.io/projected/723ec19b-8f91-4289-a85c-c868ee12f4fb-kube-api-access-9cxw5\") pod \"must-gather-w84fj\" (UID: \"723ec19b-8f91-4289-a85c-c868ee12f4fb\") " pod="openshift-must-gather-jw49p/must-gather-w84fj" Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.419846 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/723ec19b-8f91-4289-a85c-c868ee12f4fb-must-gather-output\") pod \"must-gather-w84fj\" (UID: \"723ec19b-8f91-4289-a85c-c868ee12f4fb\") " pod="openshift-must-gather-jw49p/must-gather-w84fj" Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.420254 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/723ec19b-8f91-4289-a85c-c868ee12f4fb-must-gather-output\") pod \"must-gather-w84fj\" (UID: \"723ec19b-8f91-4289-a85c-c868ee12f4fb\") " pod="openshift-must-gather-jw49p/must-gather-w84fj" Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.458260 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cxw5\" (UniqueName: \"kubernetes.io/projected/723ec19b-8f91-4289-a85c-c868ee12f4fb-kube-api-access-9cxw5\") pod \"must-gather-w84fj\" (UID: \"723ec19b-8f91-4289-a85c-c868ee12f4fb\") " pod="openshift-must-gather-jw49p/must-gather-w84fj" Feb 19 11:05:52 crc kubenswrapper[4811]: I0219 11:05:52.570803 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jw49p/must-gather-w84fj" Feb 19 11:05:53 crc kubenswrapper[4811]: I0219 11:05:53.082540 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jw49p/must-gather-w84fj"] Feb 19 11:05:54 crc kubenswrapper[4811]: I0219 11:05:54.050768 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jw49p/must-gather-w84fj" event={"ID":"723ec19b-8f91-4289-a85c-c868ee12f4fb","Type":"ContainerStarted","Data":"b8ab940424bd336a5b1f49fe593782937e5570b7ad85657d568bd675f0290aec"} Feb 19 11:06:01 crc kubenswrapper[4811]: I0219 11:06:01.128777 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jw49p/must-gather-w84fj" event={"ID":"723ec19b-8f91-4289-a85c-c868ee12f4fb","Type":"ContainerStarted","Data":"c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e"} Feb 19 11:06:01 crc kubenswrapper[4811]: I0219 11:06:01.129644 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jw49p/must-gather-w84fj" event={"ID":"723ec19b-8f91-4289-a85c-c868ee12f4fb","Type":"ContainerStarted","Data":"b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b"} Feb 19 11:06:01 crc kubenswrapper[4811]: I0219 11:06:01.150241 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jw49p/must-gather-w84fj" podStartSLOduration=2.329037777 podStartE2EDuration="9.150219473s" podCreationTimestamp="2026-02-19 11:05:52 +0000 UTC" firstStartedPulling="2026-02-19 11:05:53.097568825 +0000 UTC m=+3441.624033511" lastFinishedPulling="2026-02-19 11:05:59.918750501 +0000 UTC m=+3448.445215207" observedRunningTime="2026-02-19 11:06:01.146463549 +0000 UTC m=+3449.672928235" watchObservedRunningTime="2026-02-19 11:06:01.150219473 +0000 UTC m=+3449.676684149" Feb 19 11:06:04 crc kubenswrapper[4811]: I0219 11:06:04.009038 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-jw49p/crc-debug-g59nl"] Feb 19 11:06:04 crc kubenswrapper[4811]: I0219 11:06:04.011494 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-g59nl" Feb 19 11:06:04 crc kubenswrapper[4811]: I0219 11:06:04.014749 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jw49p"/"default-dockercfg-k9st2" Feb 19 11:06:04 crc kubenswrapper[4811]: I0219 11:06:04.101974 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs6dx\" (UniqueName: \"kubernetes.io/projected/294259e2-5afb-46a2-a602-36aadb1bb185-kube-api-access-qs6dx\") pod \"crc-debug-g59nl\" (UID: \"294259e2-5afb-46a2-a602-36aadb1bb185\") " pod="openshift-must-gather-jw49p/crc-debug-g59nl" Feb 19 11:06:04 crc kubenswrapper[4811]: I0219 11:06:04.102314 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/294259e2-5afb-46a2-a602-36aadb1bb185-host\") pod \"crc-debug-g59nl\" (UID: \"294259e2-5afb-46a2-a602-36aadb1bb185\") " pod="openshift-must-gather-jw49p/crc-debug-g59nl" Feb 19 11:06:04 crc kubenswrapper[4811]: I0219 11:06:04.204318 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs6dx\" (UniqueName: \"kubernetes.io/projected/294259e2-5afb-46a2-a602-36aadb1bb185-kube-api-access-qs6dx\") pod \"crc-debug-g59nl\" (UID: \"294259e2-5afb-46a2-a602-36aadb1bb185\") " pod="openshift-must-gather-jw49p/crc-debug-g59nl" Feb 19 11:06:04 crc kubenswrapper[4811]: I0219 11:06:04.204439 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/294259e2-5afb-46a2-a602-36aadb1bb185-host\") pod \"crc-debug-g59nl\" (UID: \"294259e2-5afb-46a2-a602-36aadb1bb185\") " pod="openshift-must-gather-jw49p/crc-debug-g59nl" Feb 19 11:06:04 crc 
kubenswrapper[4811]: I0219 11:06:04.204548 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/294259e2-5afb-46a2-a602-36aadb1bb185-host\") pod \"crc-debug-g59nl\" (UID: \"294259e2-5afb-46a2-a602-36aadb1bb185\") " pod="openshift-must-gather-jw49p/crc-debug-g59nl" Feb 19 11:06:04 crc kubenswrapper[4811]: I0219 11:06:04.226042 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs6dx\" (UniqueName: \"kubernetes.io/projected/294259e2-5afb-46a2-a602-36aadb1bb185-kube-api-access-qs6dx\") pod \"crc-debug-g59nl\" (UID: \"294259e2-5afb-46a2-a602-36aadb1bb185\") " pod="openshift-must-gather-jw49p/crc-debug-g59nl" Feb 19 11:06:04 crc kubenswrapper[4811]: I0219 11:06:04.334917 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-g59nl" Feb 19 11:06:04 crc kubenswrapper[4811]: W0219 11:06:04.378681 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod294259e2_5afb_46a2_a602_36aadb1bb185.slice/crio-3777b4544b3f3b2546f47f58de6c7013f5db59b184f474965390e3b2b0db47af WatchSource:0}: Error finding container 3777b4544b3f3b2546f47f58de6c7013f5db59b184f474965390e3b2b0db47af: Status 404 returned error can't find the container with id 3777b4544b3f3b2546f47f58de6c7013f5db59b184f474965390e3b2b0db47af Feb 19 11:06:05 crc kubenswrapper[4811]: I0219 11:06:05.163863 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jw49p/crc-debug-g59nl" event={"ID":"294259e2-5afb-46a2-a602-36aadb1bb185","Type":"ContainerStarted","Data":"3777b4544b3f3b2546f47f58de6c7013f5db59b184f474965390e3b2b0db47af"} Feb 19 11:06:16 crc kubenswrapper[4811]: I0219 11:06:16.291364 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jw49p/crc-debug-g59nl" 
event={"ID":"294259e2-5afb-46a2-a602-36aadb1bb185","Type":"ContainerStarted","Data":"c06a9ab57b361aed0fb06be2148ae3363fa762e1794e3135ee5103ba16b0e748"} Feb 19 11:06:16 crc kubenswrapper[4811]: I0219 11:06:16.314650 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jw49p/crc-debug-g59nl" podStartSLOduration=2.594178253 podStartE2EDuration="13.314625481s" podCreationTimestamp="2026-02-19 11:06:03 +0000 UTC" firstStartedPulling="2026-02-19 11:06:04.38103546 +0000 UTC m=+3452.907500146" lastFinishedPulling="2026-02-19 11:06:15.101482678 +0000 UTC m=+3463.627947374" observedRunningTime="2026-02-19 11:06:16.307664996 +0000 UTC m=+3464.834129692" watchObservedRunningTime="2026-02-19 11:06:16.314625481 +0000 UTC m=+3464.841090167" Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.397078 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xjgrn"] Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.399705 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.427953 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xjgrn"] Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.478488 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tphxs\" (UniqueName: \"kubernetes.io/projected/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-kube-api-access-tphxs\") pod \"redhat-operators-xjgrn\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.478632 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-catalog-content\") pod \"redhat-operators-xjgrn\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.478964 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-utilities\") pod \"redhat-operators-xjgrn\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.580997 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-utilities\") pod \"redhat-operators-xjgrn\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.581117 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tphxs\" (UniqueName: \"kubernetes.io/projected/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-kube-api-access-tphxs\") pod \"redhat-operators-xjgrn\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.581193 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-catalog-content\") pod \"redhat-operators-xjgrn\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.581738 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-utilities\") pod \"redhat-operators-xjgrn\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.581762 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-catalog-content\") pod \"redhat-operators-xjgrn\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.608159 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tphxs\" (UniqueName: \"kubernetes.io/projected/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-kube-api-access-tphxs\") pod \"redhat-operators-xjgrn\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:21 crc kubenswrapper[4811]: I0219 11:06:21.734493 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:22 crc kubenswrapper[4811]: I0219 11:06:22.780039 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xjgrn"] Feb 19 11:06:23 crc kubenswrapper[4811]: I0219 11:06:23.377319 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjgrn" event={"ID":"ef06d1a3-6404-4a0b-9522-2a3d7fe26490","Type":"ContainerDied","Data":"89333a06e65ed0fb6add81ee1757a74cac846410dbbc6873109638a904cd9d04"} Feb 19 11:06:23 crc kubenswrapper[4811]: I0219 11:06:23.377134 4811 generic.go:334] "Generic (PLEG): container finished" podID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerID="89333a06e65ed0fb6add81ee1757a74cac846410dbbc6873109638a904cd9d04" exitCode=0 Feb 19 11:06:23 crc kubenswrapper[4811]: I0219 11:06:23.378001 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjgrn" event={"ID":"ef06d1a3-6404-4a0b-9522-2a3d7fe26490","Type":"ContainerStarted","Data":"bc01cf3a264669bb72a6ec774bf6df731730e1d3412ed5234724cd2c8fced7ee"} Feb 19 11:06:25 crc kubenswrapper[4811]: I0219 11:06:25.399890 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjgrn" event={"ID":"ef06d1a3-6404-4a0b-9522-2a3d7fe26490","Type":"ContainerStarted","Data":"07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712"} Feb 19 11:06:27 crc kubenswrapper[4811]: I0219 11:06:27.429673 4811 generic.go:334] "Generic (PLEG): container finished" podID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerID="07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712" exitCode=0 Feb 19 11:06:27 crc kubenswrapper[4811]: I0219 11:06:27.429742 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjgrn" 
event={"ID":"ef06d1a3-6404-4a0b-9522-2a3d7fe26490","Type":"ContainerDied","Data":"07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712"} Feb 19 11:06:30 crc kubenswrapper[4811]: I0219 11:06:30.466289 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjgrn" event={"ID":"ef06d1a3-6404-4a0b-9522-2a3d7fe26490","Type":"ContainerStarted","Data":"4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d"} Feb 19 11:06:30 crc kubenswrapper[4811]: I0219 11:06:30.508344 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xjgrn" podStartSLOduration=4.233867704 podStartE2EDuration="9.508316736s" podCreationTimestamp="2026-02-19 11:06:21 +0000 UTC" firstStartedPulling="2026-02-19 11:06:23.379673902 +0000 UTC m=+3471.906138588" lastFinishedPulling="2026-02-19 11:06:28.654122944 +0000 UTC m=+3477.180587620" observedRunningTime="2026-02-19 11:06:30.4952926 +0000 UTC m=+3479.021757296" watchObservedRunningTime="2026-02-19 11:06:30.508316736 +0000 UTC m=+3479.034781422" Feb 19 11:06:31 crc kubenswrapper[4811]: I0219 11:06:31.735111 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:31 crc kubenswrapper[4811]: I0219 11:06:31.735633 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:06:32 crc kubenswrapper[4811]: I0219 11:06:32.793976 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xjgrn" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerName="registry-server" probeResult="failure" output=< Feb 19 11:06:32 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 19 11:06:32 crc kubenswrapper[4811]: > Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.727765 4811 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-b4d5q"] Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.730722 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.740091 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4d5q"] Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.830634 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-utilities\") pod \"certified-operators-b4d5q\" (UID: \"aec10adb-ed64-4434-b2a8-dd504a577c35\") " pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.830731 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-catalog-content\") pod \"certified-operators-b4d5q\" (UID: \"aec10adb-ed64-4434-b2a8-dd504a577c35\") " pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.830795 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnnq6\" (UniqueName: \"kubernetes.io/projected/aec10adb-ed64-4434-b2a8-dd504a577c35-kube-api-access-rnnq6\") pod \"certified-operators-b4d5q\" (UID: \"aec10adb-ed64-4434-b2a8-dd504a577c35\") " pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.932761 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnnq6\" (UniqueName: \"kubernetes.io/projected/aec10adb-ed64-4434-b2a8-dd504a577c35-kube-api-access-rnnq6\") pod \"certified-operators-b4d5q\" (UID: 
\"aec10adb-ed64-4434-b2a8-dd504a577c35\") " pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.932902 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-utilities\") pod \"certified-operators-b4d5q\" (UID: \"aec10adb-ed64-4434-b2a8-dd504a577c35\") " pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.932956 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-catalog-content\") pod \"certified-operators-b4d5q\" (UID: \"aec10adb-ed64-4434-b2a8-dd504a577c35\") " pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.933389 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-catalog-content\") pod \"certified-operators-b4d5q\" (UID: \"aec10adb-ed64-4434-b2a8-dd504a577c35\") " pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.933518 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-utilities\") pod \"certified-operators-b4d5q\" (UID: \"aec10adb-ed64-4434-b2a8-dd504a577c35\") " pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:41 crc kubenswrapper[4811]: I0219 11:06:41.960620 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnnq6\" (UniqueName: \"kubernetes.io/projected/aec10adb-ed64-4434-b2a8-dd504a577c35-kube-api-access-rnnq6\") pod \"certified-operators-b4d5q\" (UID: \"aec10adb-ed64-4434-b2a8-dd504a577c35\") " 
pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:42 crc kubenswrapper[4811]: I0219 11:06:42.113464 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:42 crc kubenswrapper[4811]: I0219 11:06:42.697277 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4d5q"] Feb 19 11:06:42 crc kubenswrapper[4811]: I0219 11:06:42.844388 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xjgrn" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerName="registry-server" probeResult="failure" output=< Feb 19 11:06:42 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 19 11:06:42 crc kubenswrapper[4811]: > Feb 19 11:06:43 crc kubenswrapper[4811]: I0219 11:06:43.591733 4811 generic.go:334] "Generic (PLEG): container finished" podID="aec10adb-ed64-4434-b2a8-dd504a577c35" containerID="d0298ee7fd64e83730e5c2189f480bd28cdc2d5607782aa823ad35458fc8e32d" exitCode=0 Feb 19 11:06:43 crc kubenswrapper[4811]: I0219 11:06:43.591954 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4d5q" event={"ID":"aec10adb-ed64-4434-b2a8-dd504a577c35","Type":"ContainerDied","Data":"d0298ee7fd64e83730e5c2189f480bd28cdc2d5607782aa823ad35458fc8e32d"} Feb 19 11:06:43 crc kubenswrapper[4811]: I0219 11:06:43.592431 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4d5q" event={"ID":"aec10adb-ed64-4434-b2a8-dd504a577c35","Type":"ContainerStarted","Data":"5287c5bf36b329b3e5788d35f6bc45ec12c7002ffe5468a46ad5f2d321d7885b"} Feb 19 11:06:45 crc kubenswrapper[4811]: I0219 11:06:45.614402 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4d5q" 
event={"ID":"aec10adb-ed64-4434-b2a8-dd504a577c35","Type":"ContainerStarted","Data":"e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762"} Feb 19 11:06:47 crc kubenswrapper[4811]: I0219 11:06:47.634743 4811 generic.go:334] "Generic (PLEG): container finished" podID="aec10adb-ed64-4434-b2a8-dd504a577c35" containerID="e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762" exitCode=0 Feb 19 11:06:47 crc kubenswrapper[4811]: I0219 11:06:47.634833 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4d5q" event={"ID":"aec10adb-ed64-4434-b2a8-dd504a577c35","Type":"ContainerDied","Data":"e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762"} Feb 19 11:06:48 crc kubenswrapper[4811]: I0219 11:06:48.648560 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4d5q" event={"ID":"aec10adb-ed64-4434-b2a8-dd504a577c35","Type":"ContainerStarted","Data":"093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360"} Feb 19 11:06:48 crc kubenswrapper[4811]: I0219 11:06:48.674669 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b4d5q" podStartSLOduration=3.132029661 podStartE2EDuration="7.67464282s" podCreationTimestamp="2026-02-19 11:06:41 +0000 UTC" firstStartedPulling="2026-02-19 11:06:43.595532029 +0000 UTC m=+3492.121996715" lastFinishedPulling="2026-02-19 11:06:48.138145188 +0000 UTC m=+3496.664609874" observedRunningTime="2026-02-19 11:06:48.671260696 +0000 UTC m=+3497.197725382" watchObservedRunningTime="2026-02-19 11:06:48.67464282 +0000 UTC m=+3497.201107506" Feb 19 11:06:52 crc kubenswrapper[4811]: I0219 11:06:52.114054 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:52 crc kubenswrapper[4811]: I0219 11:06:52.117121 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:52 crc kubenswrapper[4811]: I0219 11:06:52.162154 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:52 crc kubenswrapper[4811]: I0219 11:06:52.792655 4811 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xjgrn" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerName="registry-server" probeResult="failure" output=< Feb 19 11:06:52 crc kubenswrapper[4811]: timeout: failed to connect service ":50051" within 1s Feb 19 11:06:52 crc kubenswrapper[4811]: > Feb 19 11:06:53 crc kubenswrapper[4811]: I0219 11:06:53.749786 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:53 crc kubenswrapper[4811]: I0219 11:06:53.803775 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b4d5q"] Feb 19 11:06:55 crc kubenswrapper[4811]: I0219 11:06:55.718799 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b4d5q" podUID="aec10adb-ed64-4434-b2a8-dd504a577c35" containerName="registry-server" containerID="cri-o://093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360" gracePeriod=2 Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.184706 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.262198 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnnq6\" (UniqueName: \"kubernetes.io/projected/aec10adb-ed64-4434-b2a8-dd504a577c35-kube-api-access-rnnq6\") pod \"aec10adb-ed64-4434-b2a8-dd504a577c35\" (UID: \"aec10adb-ed64-4434-b2a8-dd504a577c35\") " Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.262401 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-utilities\") pod \"aec10adb-ed64-4434-b2a8-dd504a577c35\" (UID: \"aec10adb-ed64-4434-b2a8-dd504a577c35\") " Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.262476 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-catalog-content\") pod \"aec10adb-ed64-4434-b2a8-dd504a577c35\" (UID: \"aec10adb-ed64-4434-b2a8-dd504a577c35\") " Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.265612 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-utilities" (OuterVolumeSpecName: "utilities") pod "aec10adb-ed64-4434-b2a8-dd504a577c35" (UID: "aec10adb-ed64-4434-b2a8-dd504a577c35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.293848 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec10adb-ed64-4434-b2a8-dd504a577c35-kube-api-access-rnnq6" (OuterVolumeSpecName: "kube-api-access-rnnq6") pod "aec10adb-ed64-4434-b2a8-dd504a577c35" (UID: "aec10adb-ed64-4434-b2a8-dd504a577c35"). InnerVolumeSpecName "kube-api-access-rnnq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.356820 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aec10adb-ed64-4434-b2a8-dd504a577c35" (UID: "aec10adb-ed64-4434-b2a8-dd504a577c35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.364990 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnnq6\" (UniqueName: \"kubernetes.io/projected/aec10adb-ed64-4434-b2a8-dd504a577c35-kube-api-access-rnnq6\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.365032 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.365043 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec10adb-ed64-4434-b2a8-dd504a577c35-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.732503 4811 generic.go:334] "Generic (PLEG): container finished" podID="aec10adb-ed64-4434-b2a8-dd504a577c35" containerID="093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360" exitCode=0 Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.732573 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4d5q" event={"ID":"aec10adb-ed64-4434-b2a8-dd504a577c35","Type":"ContainerDied","Data":"093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360"} Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.732615 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-b4d5q" event={"ID":"aec10adb-ed64-4434-b2a8-dd504a577c35","Type":"ContainerDied","Data":"5287c5bf36b329b3e5788d35f6bc45ec12c7002ffe5468a46ad5f2d321d7885b"} Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.732643 4811 scope.go:117] "RemoveContainer" containerID="093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.732863 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4d5q" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.817659 4811 scope.go:117] "RemoveContainer" containerID="e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.819385 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b4d5q"] Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.832935 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b4d5q"] Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.846308 4811 scope.go:117] "RemoveContainer" containerID="d0298ee7fd64e83730e5c2189f480bd28cdc2d5607782aa823ad35458fc8e32d" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.886233 4811 scope.go:117] "RemoveContainer" containerID="093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360" Feb 19 11:06:56 crc kubenswrapper[4811]: E0219 11:06:56.887815 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360\": container with ID starting with 093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360 not found: ID does not exist" containerID="093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 
11:06:56.887859 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360"} err="failed to get container status \"093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360\": rpc error: code = NotFound desc = could not find container \"093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360\": container with ID starting with 093e13b308d8f701331ca501285d34faf03950615171e093e618a173d125c360 not found: ID does not exist" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.887897 4811 scope.go:117] "RemoveContainer" containerID="e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762" Feb 19 11:06:56 crc kubenswrapper[4811]: E0219 11:06:56.888721 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762\": container with ID starting with e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762 not found: ID does not exist" containerID="e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.888759 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762"} err="failed to get container status \"e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762\": rpc error: code = NotFound desc = could not find container \"e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762\": container with ID starting with e37bc6cee8226b9e2e16d3e1e2d5cd1b3c699beed96a7d6a7e07d9d9044b0762 not found: ID does not exist" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.888777 4811 scope.go:117] "RemoveContainer" containerID="d0298ee7fd64e83730e5c2189f480bd28cdc2d5607782aa823ad35458fc8e32d" Feb 19 11:06:56 crc 
kubenswrapper[4811]: E0219 11:06:56.889086 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0298ee7fd64e83730e5c2189f480bd28cdc2d5607782aa823ad35458fc8e32d\": container with ID starting with d0298ee7fd64e83730e5c2189f480bd28cdc2d5607782aa823ad35458fc8e32d not found: ID does not exist" containerID="d0298ee7fd64e83730e5c2189f480bd28cdc2d5607782aa823ad35458fc8e32d" Feb 19 11:06:56 crc kubenswrapper[4811]: I0219 11:06:56.889138 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0298ee7fd64e83730e5c2189f480bd28cdc2d5607782aa823ad35458fc8e32d"} err="failed to get container status \"d0298ee7fd64e83730e5c2189f480bd28cdc2d5607782aa823ad35458fc8e32d\": rpc error: code = NotFound desc = could not find container \"d0298ee7fd64e83730e5c2189f480bd28cdc2d5607782aa823ad35458fc8e32d\": container with ID starting with d0298ee7fd64e83730e5c2189f480bd28cdc2d5607782aa823ad35458fc8e32d not found: ID does not exist" Feb 19 11:06:57 crc kubenswrapper[4811]: I0219 11:06:57.747129 4811 generic.go:334] "Generic (PLEG): container finished" podID="294259e2-5afb-46a2-a602-36aadb1bb185" containerID="c06a9ab57b361aed0fb06be2148ae3363fa762e1794e3135ee5103ba16b0e748" exitCode=0 Feb 19 11:06:57 crc kubenswrapper[4811]: I0219 11:06:57.747209 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jw49p/crc-debug-g59nl" event={"ID":"294259e2-5afb-46a2-a602-36aadb1bb185","Type":"ContainerDied","Data":"c06a9ab57b361aed0fb06be2148ae3363fa762e1794e3135ee5103ba16b0e748"} Feb 19 11:06:58 crc kubenswrapper[4811]: I0219 11:06:58.593076 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec10adb-ed64-4434-b2a8-dd504a577c35" path="/var/lib/kubelet/pods/aec10adb-ed64-4434-b2a8-dd504a577c35/volumes" Feb 19 11:06:58 crc kubenswrapper[4811]: I0219 11:06:58.868379 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-g59nl" Feb 19 11:06:58 crc kubenswrapper[4811]: I0219 11:06:58.903072 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jw49p/crc-debug-g59nl"] Feb 19 11:06:58 crc kubenswrapper[4811]: I0219 11:06:58.913872 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs6dx\" (UniqueName: \"kubernetes.io/projected/294259e2-5afb-46a2-a602-36aadb1bb185-kube-api-access-qs6dx\") pod \"294259e2-5afb-46a2-a602-36aadb1bb185\" (UID: \"294259e2-5afb-46a2-a602-36aadb1bb185\") " Feb 19 11:06:58 crc kubenswrapper[4811]: I0219 11:06:58.914061 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/294259e2-5afb-46a2-a602-36aadb1bb185-host\") pod \"294259e2-5afb-46a2-a602-36aadb1bb185\" (UID: \"294259e2-5afb-46a2-a602-36aadb1bb185\") " Feb 19 11:06:58 crc kubenswrapper[4811]: I0219 11:06:58.914348 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/294259e2-5afb-46a2-a602-36aadb1bb185-host" (OuterVolumeSpecName: "host") pod "294259e2-5afb-46a2-a602-36aadb1bb185" (UID: "294259e2-5afb-46a2-a602-36aadb1bb185"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 11:06:58 crc kubenswrapper[4811]: I0219 11:06:58.914436 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jw49p/crc-debug-g59nl"] Feb 19 11:06:58 crc kubenswrapper[4811]: I0219 11:06:58.914876 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/294259e2-5afb-46a2-a602-36aadb1bb185-host\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:58 crc kubenswrapper[4811]: I0219 11:06:58.920602 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294259e2-5afb-46a2-a602-36aadb1bb185-kube-api-access-qs6dx" (OuterVolumeSpecName: "kube-api-access-qs6dx") pod "294259e2-5afb-46a2-a602-36aadb1bb185" (UID: "294259e2-5afb-46a2-a602-36aadb1bb185"). InnerVolumeSpecName "kube-api-access-qs6dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:06:59 crc kubenswrapper[4811]: I0219 11:06:59.016762 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs6dx\" (UniqueName: \"kubernetes.io/projected/294259e2-5afb-46a2-a602-36aadb1bb185-kube-api-access-qs6dx\") on node \"crc\" DevicePath \"\"" Feb 19 11:06:59 crc kubenswrapper[4811]: I0219 11:06:59.768768 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3777b4544b3f3b2546f47f58de6c7013f5db59b184f474965390e3b2b0db47af" Feb 19 11:06:59 crc kubenswrapper[4811]: I0219 11:06:59.768815 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-g59nl" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.175040 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jw49p/crc-debug-djwfs"] Feb 19 11:07:00 crc kubenswrapper[4811]: E0219 11:07:00.177062 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec10adb-ed64-4434-b2a8-dd504a577c35" containerName="extract-content" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.177160 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec10adb-ed64-4434-b2a8-dd504a577c35" containerName="extract-content" Feb 19 11:07:00 crc kubenswrapper[4811]: E0219 11:07:00.177309 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec10adb-ed64-4434-b2a8-dd504a577c35" containerName="extract-utilities" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.177409 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec10adb-ed64-4434-b2a8-dd504a577c35" containerName="extract-utilities" Feb 19 11:07:00 crc kubenswrapper[4811]: E0219 11:07:00.179734 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec10adb-ed64-4434-b2a8-dd504a577c35" containerName="registry-server" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.179913 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec10adb-ed64-4434-b2a8-dd504a577c35" containerName="registry-server" Feb 19 11:07:00 crc kubenswrapper[4811]: E0219 11:07:00.179994 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294259e2-5afb-46a2-a602-36aadb1bb185" containerName="container-00" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.180055 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="294259e2-5afb-46a2-a602-36aadb1bb185" containerName="container-00" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.180575 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec10adb-ed64-4434-b2a8-dd504a577c35" 
containerName="registry-server" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.180662 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="294259e2-5afb-46a2-a602-36aadb1bb185" containerName="container-00" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.181443 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-djwfs" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.187236 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jw49p"/"default-dockercfg-k9st2" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.242853 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj6q7\" (UniqueName: \"kubernetes.io/projected/c9fee662-a48c-488f-a561-aaaaec65440b-kube-api-access-vj6q7\") pod \"crc-debug-djwfs\" (UID: \"c9fee662-a48c-488f-a561-aaaaec65440b\") " pod="openshift-must-gather-jw49p/crc-debug-djwfs" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.243017 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9fee662-a48c-488f-a561-aaaaec65440b-host\") pod \"crc-debug-djwfs\" (UID: \"c9fee662-a48c-488f-a561-aaaaec65440b\") " pod="openshift-must-gather-jw49p/crc-debug-djwfs" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.345250 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9fee662-a48c-488f-a561-aaaaec65440b-host\") pod \"crc-debug-djwfs\" (UID: \"c9fee662-a48c-488f-a561-aaaaec65440b\") " pod="openshift-must-gather-jw49p/crc-debug-djwfs" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.345384 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj6q7\" (UniqueName: 
\"kubernetes.io/projected/c9fee662-a48c-488f-a561-aaaaec65440b-kube-api-access-vj6q7\") pod \"crc-debug-djwfs\" (UID: \"c9fee662-a48c-488f-a561-aaaaec65440b\") " pod="openshift-must-gather-jw49p/crc-debug-djwfs" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.345544 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9fee662-a48c-488f-a561-aaaaec65440b-host\") pod \"crc-debug-djwfs\" (UID: \"c9fee662-a48c-488f-a561-aaaaec65440b\") " pod="openshift-must-gather-jw49p/crc-debug-djwfs" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.367730 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj6q7\" (UniqueName: \"kubernetes.io/projected/c9fee662-a48c-488f-a561-aaaaec65440b-kube-api-access-vj6q7\") pod \"crc-debug-djwfs\" (UID: \"c9fee662-a48c-488f-a561-aaaaec65440b\") " pod="openshift-must-gather-jw49p/crc-debug-djwfs" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.501862 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-djwfs" Feb 19 11:07:00 crc kubenswrapper[4811]: W0219 11:07:00.538249 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9fee662_a48c_488f_a561_aaaaec65440b.slice/crio-c5129b129c69ef1662fa63755952626f9e33b967f7440773fec41ff827a5a6e6 WatchSource:0}: Error finding container c5129b129c69ef1662fa63755952626f9e33b967f7440773fec41ff827a5a6e6: Status 404 returned error can't find the container with id c5129b129c69ef1662fa63755952626f9e33b967f7440773fec41ff827a5a6e6 Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.600859 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294259e2-5afb-46a2-a602-36aadb1bb185" path="/var/lib/kubelet/pods/294259e2-5afb-46a2-a602-36aadb1bb185/volumes" Feb 19 11:07:00 crc kubenswrapper[4811]: I0219 11:07:00.780735 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jw49p/crc-debug-djwfs" event={"ID":"c9fee662-a48c-488f-a561-aaaaec65440b","Type":"ContainerStarted","Data":"c5129b129c69ef1662fa63755952626f9e33b967f7440773fec41ff827a5a6e6"} Feb 19 11:07:01 crc kubenswrapper[4811]: I0219 11:07:01.791803 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:07:01 crc kubenswrapper[4811]: I0219 11:07:01.799330 4811 generic.go:334] "Generic (PLEG): container finished" podID="c9fee662-a48c-488f-a561-aaaaec65440b" containerID="82b0574a9eb1e4ae01e193dcfd05aa7cbdf74a5b86f5cded225cbfbb1d9bc833" exitCode=0 Feb 19 11:07:01 crc kubenswrapper[4811]: I0219 11:07:01.799391 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jw49p/crc-debug-djwfs" event={"ID":"c9fee662-a48c-488f-a561-aaaaec65440b","Type":"ContainerDied","Data":"82b0574a9eb1e4ae01e193dcfd05aa7cbdf74a5b86f5cded225cbfbb1d9bc833"} Feb 19 11:07:01 crc kubenswrapper[4811]: I0219 
11:07:01.857328 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:07:02 crc kubenswrapper[4811]: I0219 11:07:02.030213 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xjgrn"] Feb 19 11:07:02 crc kubenswrapper[4811]: I0219 11:07:02.325629 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jw49p/crc-debug-djwfs"] Feb 19 11:07:02 crc kubenswrapper[4811]: I0219 11:07:02.334684 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jw49p/crc-debug-djwfs"] Feb 19 11:07:02 crc kubenswrapper[4811]: I0219 11:07:02.935417 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-djwfs" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.098333 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj6q7\" (UniqueName: \"kubernetes.io/projected/c9fee662-a48c-488f-a561-aaaaec65440b-kube-api-access-vj6q7\") pod \"c9fee662-a48c-488f-a561-aaaaec65440b\" (UID: \"c9fee662-a48c-488f-a561-aaaaec65440b\") " Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.098633 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9fee662-a48c-488f-a561-aaaaec65440b-host\") pod \"c9fee662-a48c-488f-a561-aaaaec65440b\" (UID: \"c9fee662-a48c-488f-a561-aaaaec65440b\") " Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.098756 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9fee662-a48c-488f-a561-aaaaec65440b-host" (OuterVolumeSpecName: "host") pod "c9fee662-a48c-488f-a561-aaaaec65440b" (UID: "c9fee662-a48c-488f-a561-aaaaec65440b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.099255 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c9fee662-a48c-488f-a561-aaaaec65440b-host\") on node \"crc\" DevicePath \"\"" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.105358 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fee662-a48c-488f-a561-aaaaec65440b-kube-api-access-vj6q7" (OuterVolumeSpecName: "kube-api-access-vj6q7") pod "c9fee662-a48c-488f-a561-aaaaec65440b" (UID: "c9fee662-a48c-488f-a561-aaaaec65440b"). InnerVolumeSpecName "kube-api-access-vj6q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.201110 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj6q7\" (UniqueName: \"kubernetes.io/projected/c9fee662-a48c-488f-a561-aaaaec65440b-kube-api-access-vj6q7\") on node \"crc\" DevicePath \"\"" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.518857 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jw49p/crc-debug-jv4ds"] Feb 19 11:07:03 crc kubenswrapper[4811]: E0219 11:07:03.519616 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fee662-a48c-488f-a561-aaaaec65440b" containerName="container-00" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.519645 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fee662-a48c-488f-a561-aaaaec65440b" containerName="container-00" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.519916 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9fee662-a48c-488f-a561-aaaaec65440b" containerName="container-00" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.520802 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-jv4ds" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.609987 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1965e95a-5442-4307-928c-30008502c0a3-host\") pod \"crc-debug-jv4ds\" (UID: \"1965e95a-5442-4307-928c-30008502c0a3\") " pod="openshift-must-gather-jw49p/crc-debug-jv4ds" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.610091 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4hx8\" (UniqueName: \"kubernetes.io/projected/1965e95a-5442-4307-928c-30008502c0a3-kube-api-access-t4hx8\") pod \"crc-debug-jv4ds\" (UID: \"1965e95a-5442-4307-928c-30008502c0a3\") " pod="openshift-must-gather-jw49p/crc-debug-jv4ds" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.712528 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1965e95a-5442-4307-928c-30008502c0a3-host\") pod \"crc-debug-jv4ds\" (UID: \"1965e95a-5442-4307-928c-30008502c0a3\") " pod="openshift-must-gather-jw49p/crc-debug-jv4ds" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.712621 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4hx8\" (UniqueName: \"kubernetes.io/projected/1965e95a-5442-4307-928c-30008502c0a3-kube-api-access-t4hx8\") pod \"crc-debug-jv4ds\" (UID: \"1965e95a-5442-4307-928c-30008502c0a3\") " pod="openshift-must-gather-jw49p/crc-debug-jv4ds" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.712792 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1965e95a-5442-4307-928c-30008502c0a3-host\") pod \"crc-debug-jv4ds\" (UID: \"1965e95a-5442-4307-928c-30008502c0a3\") " pod="openshift-must-gather-jw49p/crc-debug-jv4ds" Feb 19 11:07:03 crc 
kubenswrapper[4811]: I0219 11:07:03.737542 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4hx8\" (UniqueName: \"kubernetes.io/projected/1965e95a-5442-4307-928c-30008502c0a3-kube-api-access-t4hx8\") pod \"crc-debug-jv4ds\" (UID: \"1965e95a-5442-4307-928c-30008502c0a3\") " pod="openshift-must-gather-jw49p/crc-debug-jv4ds" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.822748 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-djwfs" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.822791 4811 scope.go:117] "RemoveContainer" containerID="82b0574a9eb1e4ae01e193dcfd05aa7cbdf74a5b86f5cded225cbfbb1d9bc833" Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.823023 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xjgrn" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerName="registry-server" containerID="cri-o://4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d" gracePeriod=2 Feb 19 11:07:03 crc kubenswrapper[4811]: I0219 11:07:03.843636 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-jv4ds" Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.345547 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.528347 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tphxs\" (UniqueName: \"kubernetes.io/projected/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-kube-api-access-tphxs\") pod \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.528518 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-catalog-content\") pod \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.528561 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-utilities\") pod \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\" (UID: \"ef06d1a3-6404-4a0b-9522-2a3d7fe26490\") " Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.529779 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-utilities" (OuterVolumeSpecName: "utilities") pod "ef06d1a3-6404-4a0b-9522-2a3d7fe26490" (UID: "ef06d1a3-6404-4a0b-9522-2a3d7fe26490"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.536160 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-kube-api-access-tphxs" (OuterVolumeSpecName: "kube-api-access-tphxs") pod "ef06d1a3-6404-4a0b-9522-2a3d7fe26490" (UID: "ef06d1a3-6404-4a0b-9522-2a3d7fe26490"). InnerVolumeSpecName "kube-api-access-tphxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.597056 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9fee662-a48c-488f-a561-aaaaec65440b" path="/var/lib/kubelet/pods/c9fee662-a48c-488f-a561-aaaaec65440b/volumes" Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.631431 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tphxs\" (UniqueName: \"kubernetes.io/projected/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-kube-api-access-tphxs\") on node \"crc\" DevicePath \"\"" Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.631486 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.663233 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef06d1a3-6404-4a0b-9522-2a3d7fe26490" (UID: "ef06d1a3-6404-4a0b-9522-2a3d7fe26490"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.734375 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef06d1a3-6404-4a0b-9522-2a3d7fe26490-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.838078 4811 generic.go:334] "Generic (PLEG): container finished" podID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerID="4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d" exitCode=0 Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.838132 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xjgrn" Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.838144 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjgrn" event={"ID":"ef06d1a3-6404-4a0b-9522-2a3d7fe26490","Type":"ContainerDied","Data":"4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d"} Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.838202 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjgrn" event={"ID":"ef06d1a3-6404-4a0b-9522-2a3d7fe26490","Type":"ContainerDied","Data":"bc01cf3a264669bb72a6ec774bf6df731730e1d3412ed5234724cd2c8fced7ee"} Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.838228 4811 scope.go:117] "RemoveContainer" containerID="4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d" Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.845117 4811 generic.go:334] "Generic (PLEG): container finished" podID="1965e95a-5442-4307-928c-30008502c0a3" containerID="658bbc8df1d5f71b2b5e258f111c49bcb0d7ded014d79691afa917227d3c0245" exitCode=0 Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.845217 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jw49p/crc-debug-jv4ds" event={"ID":"1965e95a-5442-4307-928c-30008502c0a3","Type":"ContainerDied","Data":"658bbc8df1d5f71b2b5e258f111c49bcb0d7ded014d79691afa917227d3c0245"} Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.845300 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jw49p/crc-debug-jv4ds" event={"ID":"1965e95a-5442-4307-928c-30008502c0a3","Type":"ContainerStarted","Data":"fa0760bd472557f096d341db86adc4e3c67bc16062b711f076744af9e133ae23"} Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.902354 4811 scope.go:117] "RemoveContainer" containerID="07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712" Feb 19 11:07:04 crc 
kubenswrapper[4811]: I0219 11:07:04.925522 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xjgrn"] Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.938633 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xjgrn"] Feb 19 11:07:04 crc kubenswrapper[4811]: I0219 11:07:04.980403 4811 scope.go:117] "RemoveContainer" containerID="89333a06e65ed0fb6add81ee1757a74cac846410dbbc6873109638a904cd9d04" Feb 19 11:07:05 crc kubenswrapper[4811]: I0219 11:07:05.017187 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jw49p/crc-debug-jv4ds"] Feb 19 11:07:05 crc kubenswrapper[4811]: I0219 11:07:05.023510 4811 scope.go:117] "RemoveContainer" containerID="4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d" Feb 19 11:07:05 crc kubenswrapper[4811]: E0219 11:07:05.026027 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d\": container with ID starting with 4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d not found: ID does not exist" containerID="4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d" Feb 19 11:07:05 crc kubenswrapper[4811]: I0219 11:07:05.026090 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d"} err="failed to get container status \"4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d\": rpc error: code = NotFound desc = could not find container \"4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d\": container with ID starting with 4fe9c272e3c207589aeed4105f5527f1815879f978e65c1ae76df1065eaa0c6d not found: ID does not exist" Feb 19 11:07:05 crc kubenswrapper[4811]: I0219 11:07:05.026145 4811 scope.go:117] 
"RemoveContainer" containerID="07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712" Feb 19 11:07:05 crc kubenswrapper[4811]: I0219 11:07:05.027196 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jw49p/crc-debug-jv4ds"] Feb 19 11:07:05 crc kubenswrapper[4811]: E0219 11:07:05.027681 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712\": container with ID starting with 07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712 not found: ID does not exist" containerID="07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712" Feb 19 11:07:05 crc kubenswrapper[4811]: I0219 11:07:05.027738 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712"} err="failed to get container status \"07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712\": rpc error: code = NotFound desc = could not find container \"07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712\": container with ID starting with 07aa2c57645f23f8d38dc9c66a9e7cb22a943cb393bb259aa3a1ed39fff41712 not found: ID does not exist" Feb 19 11:07:05 crc kubenswrapper[4811]: I0219 11:07:05.027777 4811 scope.go:117] "RemoveContainer" containerID="89333a06e65ed0fb6add81ee1757a74cac846410dbbc6873109638a904cd9d04" Feb 19 11:07:05 crc kubenswrapper[4811]: E0219 11:07:05.028263 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89333a06e65ed0fb6add81ee1757a74cac846410dbbc6873109638a904cd9d04\": container with ID starting with 89333a06e65ed0fb6add81ee1757a74cac846410dbbc6873109638a904cd9d04 not found: ID does not exist" containerID="89333a06e65ed0fb6add81ee1757a74cac846410dbbc6873109638a904cd9d04" Feb 19 11:07:05 crc 
kubenswrapper[4811]: I0219 11:07:05.028301 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89333a06e65ed0fb6add81ee1757a74cac846410dbbc6873109638a904cd9d04"} err="failed to get container status \"89333a06e65ed0fb6add81ee1757a74cac846410dbbc6873109638a904cd9d04\": rpc error: code = NotFound desc = could not find container \"89333a06e65ed0fb6add81ee1757a74cac846410dbbc6873109638a904cd9d04\": container with ID starting with 89333a06e65ed0fb6add81ee1757a74cac846410dbbc6873109638a904cd9d04 not found: ID does not exist" Feb 19 11:07:05 crc kubenswrapper[4811]: I0219 11:07:05.983964 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-jv4ds" Feb 19 11:07:06 crc kubenswrapper[4811]: I0219 11:07:06.164141 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1965e95a-5442-4307-928c-30008502c0a3-host\") pod \"1965e95a-5442-4307-928c-30008502c0a3\" (UID: \"1965e95a-5442-4307-928c-30008502c0a3\") " Feb 19 11:07:06 crc kubenswrapper[4811]: I0219 11:07:06.164272 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1965e95a-5442-4307-928c-30008502c0a3-host" (OuterVolumeSpecName: "host") pod "1965e95a-5442-4307-928c-30008502c0a3" (UID: "1965e95a-5442-4307-928c-30008502c0a3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 11:07:06 crc kubenswrapper[4811]: I0219 11:07:06.164311 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4hx8\" (UniqueName: \"kubernetes.io/projected/1965e95a-5442-4307-928c-30008502c0a3-kube-api-access-t4hx8\") pod \"1965e95a-5442-4307-928c-30008502c0a3\" (UID: \"1965e95a-5442-4307-928c-30008502c0a3\") " Feb 19 11:07:06 crc kubenswrapper[4811]: I0219 11:07:06.164986 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1965e95a-5442-4307-928c-30008502c0a3-host\") on node \"crc\" DevicePath \"\"" Feb 19 11:07:06 crc kubenswrapper[4811]: I0219 11:07:06.173766 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1965e95a-5442-4307-928c-30008502c0a3-kube-api-access-t4hx8" (OuterVolumeSpecName: "kube-api-access-t4hx8") pod "1965e95a-5442-4307-928c-30008502c0a3" (UID: "1965e95a-5442-4307-928c-30008502c0a3"). InnerVolumeSpecName "kube-api-access-t4hx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:07:06 crc kubenswrapper[4811]: I0219 11:07:06.266771 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4hx8\" (UniqueName: \"kubernetes.io/projected/1965e95a-5442-4307-928c-30008502c0a3-kube-api-access-t4hx8\") on node \"crc\" DevicePath \"\"" Feb 19 11:07:06 crc kubenswrapper[4811]: I0219 11:07:06.595053 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1965e95a-5442-4307-928c-30008502c0a3" path="/var/lib/kubelet/pods/1965e95a-5442-4307-928c-30008502c0a3/volumes" Feb 19 11:07:06 crc kubenswrapper[4811]: I0219 11:07:06.595722 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" path="/var/lib/kubelet/pods/ef06d1a3-6404-4a0b-9522-2a3d7fe26490/volumes" Feb 19 11:07:06 crc kubenswrapper[4811]: I0219 11:07:06.884848 4811 scope.go:117] "RemoveContainer" containerID="658bbc8df1d5f71b2b5e258f111c49bcb0d7ded014d79691afa917227d3c0245" Feb 19 11:07:06 crc kubenswrapper[4811]: I0219 11:07:06.884892 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jw49p/crc-debug-jv4ds" Feb 19 11:07:21 crc kubenswrapper[4811]: I0219 11:07:21.235336 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6566bc4c4b-lsrcb_39389157-f52c-4caa-85a5-e2306e903180/barbican-api/0.log" Feb 19 11:07:21 crc kubenswrapper[4811]: I0219 11:07:21.485900 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6566bc4c4b-lsrcb_39389157-f52c-4caa-85a5-e2306e903180/barbican-api-log/0.log" Feb 19 11:07:21 crc kubenswrapper[4811]: I0219 11:07:21.744416 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-594c4b4cb6-89gd8_51d8831e-3334-4e4a-84b1-98912b1dc6f2/barbican-keystone-listener/0.log" Feb 19 11:07:21 crc kubenswrapper[4811]: I0219 11:07:21.764261 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-594c4b4cb6-89gd8_51d8831e-3334-4e4a-84b1-98912b1dc6f2/barbican-keystone-listener-log/0.log" Feb 19 11:07:21 crc kubenswrapper[4811]: I0219 11:07:21.865942 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7cc994bfc5-bf96r_20451628-f7b0-43b3-b0b9-4ab828db8a77/barbican-worker/0.log" Feb 19 11:07:21 crc kubenswrapper[4811]: I0219 11:07:21.995088 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7cc994bfc5-bf96r_20451628-f7b0-43b3-b0b9-4ab828db8a77/barbican-worker-log/0.log" Feb 19 11:07:22 crc kubenswrapper[4811]: I0219 11:07:22.091385 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz_7ce25cb1-51ce-4643-9fac-e7fa4b1afce9/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:22 crc kubenswrapper[4811]: I0219 11:07:22.211349 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd0891f5-d119-48e2-8e7e-1ddfa51417a9/ceilometer-central-agent/0.log" 
Feb 19 11:07:22 crc kubenswrapper[4811]: I0219 11:07:22.228224 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd0891f5-d119-48e2-8e7e-1ddfa51417a9/ceilometer-notification-agent/0.log" Feb 19 11:07:22 crc kubenswrapper[4811]: I0219 11:07:22.361811 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd0891f5-d119-48e2-8e7e-1ddfa51417a9/proxy-httpd/0.log" Feb 19 11:07:22 crc kubenswrapper[4811]: I0219 11:07:22.409161 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd0891f5-d119-48e2-8e7e-1ddfa51417a9/sg-core/0.log" Feb 19 11:07:22 crc kubenswrapper[4811]: I0219 11:07:22.523747 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ebb19588-7922-460c-9368-05aaf6706b52/cinder-api/0.log" Feb 19 11:07:22 crc kubenswrapper[4811]: I0219 11:07:22.649178 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ebb19588-7922-460c-9368-05aaf6706b52/cinder-api-log/0.log" Feb 19 11:07:22 crc kubenswrapper[4811]: I0219 11:07:22.706751 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed/cinder-scheduler/0.log" Feb 19 11:07:22 crc kubenswrapper[4811]: I0219 11:07:22.787178 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed/probe/0.log" Feb 19 11:07:22 crc kubenswrapper[4811]: I0219 11:07:22.946502 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-88tdb_ddf9ccc5-ba24-4253-93d5-1c2024978965/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:23 crc kubenswrapper[4811]: I0219 11:07:23.013778 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-s6crt_0409deb0-2253-48f7-8f17-3bf3c70088ef/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:23 crc kubenswrapper[4811]: I0219 11:07:23.216974 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-xm8bw_deca690b-51f2-4ee0-adbb-d7dfaa165fcc/init/0.log" Feb 19 11:07:23 crc kubenswrapper[4811]: I0219 11:07:23.395196 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx_57a5bec4-f1ad-45e6-9045-dea45abd7ccf/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:23 crc kubenswrapper[4811]: I0219 11:07:23.457159 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-xm8bw_deca690b-51f2-4ee0-adbb-d7dfaa165fcc/dnsmasq-dns/0.log" Feb 19 11:07:23 crc kubenswrapper[4811]: I0219 11:07:23.495183 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-xm8bw_deca690b-51f2-4ee0-adbb-d7dfaa165fcc/init/0.log" Feb 19 11:07:23 crc kubenswrapper[4811]: I0219 11:07:23.679637 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5a784552-f4b7-42d3-a0b7-d223fbe3818c/glance-httpd/0.log" Feb 19 11:07:23 crc kubenswrapper[4811]: I0219 11:07:23.696891 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5a784552-f4b7-42d3-a0b7-d223fbe3818c/glance-log/0.log" Feb 19 11:07:23 crc kubenswrapper[4811]: I0219 11:07:23.910231 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7/glance-httpd/0.log" Feb 19 11:07:23 crc kubenswrapper[4811]: I0219 11:07:23.954323 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7/glance-log/0.log" Feb 19 11:07:24 crc kubenswrapper[4811]: I0219 11:07:24.022532 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-676dcb6496-tnhmt_19d17501-8d5f-4f9d-8fad-94f48224e910/horizon/1.log" Feb 19 11:07:24 crc kubenswrapper[4811]: I0219 11:07:24.270160 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-676dcb6496-tnhmt_19d17501-8d5f-4f9d-8fad-94f48224e910/horizon/0.log" Feb 19 11:07:24 crc kubenswrapper[4811]: I0219 11:07:24.276090 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8_2d303b5a-179e-407e-8dae-2eab291880d5/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:24 crc kubenswrapper[4811]: I0219 11:07:24.494997 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-676dcb6496-tnhmt_19d17501-8d5f-4f9d-8fad-94f48224e910/horizon-log/0.log" Feb 19 11:07:24 crc kubenswrapper[4811]: I0219 11:07:24.526492 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kg62s_4ed42933-309e-4fe0-874b-7ec5fb67d807/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:24 crc kubenswrapper[4811]: I0219 11:07:24.865029 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54bd9d4f8c-p9wzd_88141b63-6cd8-4294-a7dc-a2104eb13eb4/keystone-api/0.log" Feb 19 11:07:24 crc kubenswrapper[4811]: I0219 11:07:24.865182 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524981-j4dnl_58f1765b-3836-4257-aa59-d47c7464c1e5/keystone-cron/0.log" Feb 19 11:07:25 crc kubenswrapper[4811]: I0219 11:07:25.264173 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_5f41cd68-0e1f-48df-a830-1c581d0cfe9e/kube-state-metrics/0.log" Feb 
19 11:07:25 crc kubenswrapper[4811]: I0219 11:07:25.326784 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-flfd2_177810e6-4482-4c82-ac9a-721e76c63579/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:25 crc kubenswrapper[4811]: I0219 11:07:25.673651 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-599475cf85-sh7k6_70197607-b202-4d15-a797-ef5d138c5502/neutron-api/0.log" Feb 19 11:07:25 crc kubenswrapper[4811]: I0219 11:07:25.840503 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-599475cf85-sh7k6_70197607-b202-4d15-a797-ef5d138c5502/neutron-httpd/0.log" Feb 19 11:07:25 crc kubenswrapper[4811]: I0219 11:07:25.840562 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz_a207452e-4fab-4a38-be85-31bc84e8d6fc/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:26 crc kubenswrapper[4811]: I0219 11:07:26.444724 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b92a06d2-fab4-462e-ae54-2dd7087ad94a/nova-cell0-conductor-conductor/0.log" Feb 19 11:07:26 crc kubenswrapper[4811]: I0219 11:07:26.503774 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2746bf91-50ba-4dc9-a6cf-1f7117f48ee2/nova-api-log/0.log" Feb 19 11:07:26 crc kubenswrapper[4811]: I0219 11:07:26.671155 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2746bf91-50ba-4dc9-a6cf-1f7117f48ee2/nova-api-api/0.log" Feb 19 11:07:26 crc kubenswrapper[4811]: I0219 11:07:26.818543 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_66690abb-6530-48ba-ab47-36369217c473/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 11:07:26 crc kubenswrapper[4811]: I0219 11:07:26.824346 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_10c31d4c-29cf-4dec-8cf1-98f5f1e804a8/nova-cell1-conductor-conductor/0.log" Feb 19 11:07:27 crc kubenswrapper[4811]: I0219 11:07:27.016344 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6djpr_3aebcdfb-c20a-447c-9b30-fa8f749bbf90/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:27 crc kubenswrapper[4811]: I0219 11:07:27.165517 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e32654e-c645-4e60-9879-73ed179fa105/nova-metadata-log/0.log" Feb 19 11:07:27 crc kubenswrapper[4811]: I0219 11:07:27.618465 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_354e40bc-0a2b-4654-873c-30b6293d6192/mysql-bootstrap/0.log" Feb 19 11:07:27 crc kubenswrapper[4811]: I0219 11:07:27.620273 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_362f5b81-de68-4bce-824c-734e9d18f1f1/nova-scheduler-scheduler/0.log" Feb 19 11:07:27 crc kubenswrapper[4811]: I0219 11:07:27.830538 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_354e40bc-0a2b-4654-873c-30b6293d6192/galera/0.log" Feb 19 11:07:27 crc kubenswrapper[4811]: I0219 11:07:27.893657 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_354e40bc-0a2b-4654-873c-30b6293d6192/mysql-bootstrap/0.log" Feb 19 11:07:28 crc kubenswrapper[4811]: I0219 11:07:28.075778 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6/mysql-bootstrap/0.log" Feb 19 11:07:28 crc kubenswrapper[4811]: I0219 11:07:28.221745 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6/mysql-bootstrap/0.log" Feb 19 11:07:28 crc kubenswrapper[4811]: I0219 11:07:28.296349 4811 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6/galera/0.log" Feb 19 11:07:28 crc kubenswrapper[4811]: I0219 11:07:28.417204 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e32654e-c645-4e60-9879-73ed179fa105/nova-metadata-metadata/0.log" Feb 19 11:07:28 crc kubenswrapper[4811]: I0219 11:07:28.432058 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c6494cd0-c929-4278-974b-0839193092f1/openstackclient/0.log" Feb 19 11:07:28 crc kubenswrapper[4811]: I0219 11:07:28.635263 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qwhjn_764be2bf-8d0b-4783-afcd-de76bac7ce34/openstack-network-exporter/0.log" Feb 19 11:07:28 crc kubenswrapper[4811]: I0219 11:07:28.646185 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mvhxm_00337a1f-39d0-4583-b35b-a50aeb572f60/ovn-controller/0.log" Feb 19 11:07:29 crc kubenswrapper[4811]: I0219 11:07:29.089136 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bddvk_a76ec0f0-9b42-4e48-bd3f-68a74161cc4c/ovsdb-server-init/0.log" Feb 19 11:07:29 crc kubenswrapper[4811]: I0219 11:07:29.267343 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bddvk_a76ec0f0-9b42-4e48-bd3f-68a74161cc4c/ovsdb-server-init/0.log" Feb 19 11:07:29 crc kubenswrapper[4811]: I0219 11:07:29.279730 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bddvk_a76ec0f0-9b42-4e48-bd3f-68a74161cc4c/ovsdb-server/0.log" Feb 19 11:07:29 crc kubenswrapper[4811]: I0219 11:07:29.336547 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bddvk_a76ec0f0-9b42-4e48-bd3f-68a74161cc4c/ovs-vswitchd/0.log" Feb 19 11:07:29 crc kubenswrapper[4811]: I0219 11:07:29.505503 4811 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-m9spn_f9579646-1f18-4b3b-af75-9c5af0107c79/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:29 crc kubenswrapper[4811]: I0219 11:07:29.575157 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_195bcb34-7bf7-45d0-92d4-0fb83788222c/openstack-network-exporter/0.log" Feb 19 11:07:29 crc kubenswrapper[4811]: I0219 11:07:29.594216 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_195bcb34-7bf7-45d0-92d4-0fb83788222c/ovn-northd/0.log" Feb 19 11:07:29 crc kubenswrapper[4811]: I0219 11:07:29.798793 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e3fed93-4b85-4e79-a9d3-795dbd7695ba/openstack-network-exporter/0.log" Feb 19 11:07:29 crc kubenswrapper[4811]: I0219 11:07:29.817561 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e3fed93-4b85-4e79-a9d3-795dbd7695ba/ovsdbserver-nb/0.log" Feb 19 11:07:30 crc kubenswrapper[4811]: I0219 11:07:30.037177 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1a9b5380-d587-4d65-8ddb-b6c2faef624a/openstack-network-exporter/0.log" Feb 19 11:07:30 crc kubenswrapper[4811]: I0219 11:07:30.096141 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1a9b5380-d587-4d65-8ddb-b6c2faef624a/ovsdbserver-sb/0.log" Feb 19 11:07:30 crc kubenswrapper[4811]: I0219 11:07:30.189059 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6df884f584-x4z2k_f7db31a7-4566-4027-81f8-af4496f7dc07/placement-api/0.log" Feb 19 11:07:30 crc kubenswrapper[4811]: I0219 11:07:30.363041 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_44cb8367-2f57-4e24-9d48-b2d22a39d128/setup-container/0.log" Feb 19 11:07:30 crc kubenswrapper[4811]: I0219 11:07:30.383096 
4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6df884f584-x4z2k_f7db31a7-4566-4027-81f8-af4496f7dc07/placement-log/0.log" Feb 19 11:07:30 crc kubenswrapper[4811]: I0219 11:07:30.629137 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_44cb8367-2f57-4e24-9d48-b2d22a39d128/setup-container/0.log" Feb 19 11:07:30 crc kubenswrapper[4811]: I0219 11:07:30.665779 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c8683fae-0b62-402d-bec3-5c3be97c8c98/setup-container/0.log" Feb 19 11:07:30 crc kubenswrapper[4811]: I0219 11:07:30.724869 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_44cb8367-2f57-4e24-9d48-b2d22a39d128/rabbitmq/0.log" Feb 19 11:07:30 crc kubenswrapper[4811]: I0219 11:07:30.874905 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c8683fae-0b62-402d-bec3-5c3be97c8c98/setup-container/0.log" Feb 19 11:07:30 crc kubenswrapper[4811]: I0219 11:07:30.902393 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c8683fae-0b62-402d-bec3-5c3be97c8c98/rabbitmq/0.log" Feb 19 11:07:30 crc kubenswrapper[4811]: I0219 11:07:30.966502 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl_6563b2b6-b15d-48ca-9c1f-75f9f17321c9/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:31 crc kubenswrapper[4811]: I0219 11:07:31.185670 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt_5b78bd06-9aa5-4755-be4c-ee3d85655501/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:31 crc kubenswrapper[4811]: I0219 11:07:31.195922 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-bk8sg_36d956c8-5142-4ec4-b5fb-622c05d0e097/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:31 crc kubenswrapper[4811]: I0219 11:07:31.416759 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pdvdx_b972495c-f6be-4f80-81e5-ac652cf40950/ssh-known-hosts-edpm-deployment/0.log" Feb 19 11:07:31 crc kubenswrapper[4811]: I0219 11:07:31.441945 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-mcrbf_6b88198f-8c56-4112-956d-2361512d5f2e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:31 crc kubenswrapper[4811]: I0219 11:07:31.654372 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-666bdb77b9-x6mrc_0beeae9e-3b17-4691-8273-95fbc4be85fb/proxy-server/0.log" Feb 19 11:07:31 crc kubenswrapper[4811]: I0219 11:07:31.764055 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-666bdb77b9-x6mrc_0beeae9e-3b17-4691-8273-95fbc4be85fb/proxy-httpd/0.log" Feb 19 11:07:31 crc kubenswrapper[4811]: I0219 11:07:31.854398 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-r67q5_30bb0370-e0d8-4aaf-8a07-35ab4af890c6/swift-ring-rebalance/0.log" Feb 19 11:07:31 crc kubenswrapper[4811]: I0219 11:07:31.966118 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/account-auditor/0.log" Feb 19 11:07:31 crc kubenswrapper[4811]: I0219 11:07:31.994926 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/account-reaper/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.108026 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/account-replicator/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.160227 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/account-server/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.231252 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/container-replicator/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.250430 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/container-auditor/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.379931 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/container-updater/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.393032 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/container-server/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.478622 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/object-auditor/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.479592 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/object-expirer/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.646257 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/object-replicator/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.648092 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/object-server/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.757205 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/rsync/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.766659 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/object-updater/0.log" Feb 19 11:07:32 crc kubenswrapper[4811]: I0219 11:07:32.871347 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/swift-recon-cron/0.log" Feb 19 11:07:33 crc kubenswrapper[4811]: I0219 11:07:33.065955 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx_2017b43a-f947-4d28-97b7-5cf3d5d0cc1b/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:33 crc kubenswrapper[4811]: I0219 11:07:33.111840 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_4e965f02-705b-4631-879f-27070f0bfa70/tempest-tests-tempest-tests-runner/0.log" Feb 19 11:07:33 crc kubenswrapper[4811]: I0219 11:07:33.236719 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b/test-operator-logs-container/0.log" Feb 19 11:07:33 crc kubenswrapper[4811]: I0219 11:07:33.364758 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz_6df41748-d8bb-49c9-aa6f-403827039ae7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:07:41 crc kubenswrapper[4811]: I0219 11:07:41.711198 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_c2ff968e-6341-43f8-bf15-21009d76f661/memcached/0.log" Feb 19 11:07:56 crc kubenswrapper[4811]: I0219 11:07:56.788365 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:07:56 crc kubenswrapper[4811]: I0219 11:07:56.788970 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:08:00 crc kubenswrapper[4811]: I0219 11:08:00.633279 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/util/0.log" Feb 19 11:08:00 crc kubenswrapper[4811]: I0219 11:08:00.791224 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/util/0.log" Feb 19 11:08:00 crc kubenswrapper[4811]: I0219 11:08:00.851996 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/pull/0.log" Feb 19 11:08:00 crc kubenswrapper[4811]: I0219 11:08:00.862053 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/pull/0.log" Feb 19 11:08:01 crc kubenswrapper[4811]: I0219 11:08:01.083220 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/pull/0.log" Feb 19 11:08:01 crc kubenswrapper[4811]: I0219 11:08:01.105591 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/util/0.log" Feb 19 11:08:01 crc kubenswrapper[4811]: I0219 11:08:01.135781 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/extract/0.log" Feb 19 11:08:01 crc kubenswrapper[4811]: I0219 11:08:01.557477 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-dzgff_f5e56e20-0181-4f50-a112-b90aeb0ea9f4/manager/0.log" Feb 19 11:08:01 crc kubenswrapper[4811]: I0219 11:08:01.896209 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-wh6gz_f39b017a-34d2-491b-a44a-c3fc5ea573b7/manager/0.log" Feb 19 11:08:02 crc kubenswrapper[4811]: I0219 11:08:02.018684 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-9fqp5_5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7/manager/0.log" Feb 19 11:08:02 crc kubenswrapper[4811]: I0219 11:08:02.250545 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-cjqjn_3c49a030-0a6a-434c-986e-6957d9d27043/manager/0.log" Feb 19 11:08:02 crc kubenswrapper[4811]: I0219 11:08:02.811089 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-d66mq_46b8deed-b9de-4b32-b8e4-c3edc4ad97f9/manager/0.log" Feb 19 11:08:02 crc kubenswrapper[4811]: I0219 11:08:02.846488 
4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-c84zh_f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3/manager/0.log" Feb 19 11:08:03 crc kubenswrapper[4811]: I0219 11:08:03.030059 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-bqpcc_6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58/manager/0.log" Feb 19 11:08:03 crc kubenswrapper[4811]: I0219 11:08:03.134308 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-fvdc7_b5e1b149-0349-4c1c-be95-b9c540da49f1/manager/0.log" Feb 19 11:08:03 crc kubenswrapper[4811]: I0219 11:08:03.273484 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-f64jn_8841254d-fb8a-4391-984a-02a39949ca31/manager/0.log" Feb 19 11:08:03 crc kubenswrapper[4811]: I0219 11:08:03.558584 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-frxgp_fe03e75a-b898-4de2-8569-bce0add53798/manager/0.log" Feb 19 11:08:03 crc kubenswrapper[4811]: I0219 11:08:03.669769 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-rdtd4_03bff346-c835-4fba-9950-59e9e86d8a23/manager/0.log" Feb 19 11:08:04 crc kubenswrapper[4811]: I0219 11:08:04.009263 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-2lvw2_6c3aab50-1956-41f1-be69-aa976020f25e/manager/0.log" Feb 19 11:08:04 crc kubenswrapper[4811]: I0219 11:08:04.241741 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv_484c4e4b-0de6-427c-adde-b597fed189a7/manager/0.log" Feb 19 11:08:04 crc kubenswrapper[4811]: 
I0219 11:08:04.789274 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-d974764c4-flc2t_540975d2-b9cd-492e-9618-c4d31baf74a6/operator/0.log" Feb 19 11:08:05 crc kubenswrapper[4811]: I0219 11:08:05.008620 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-9fpk8_f23b3244-ec1f-411f-8dd2-15513a0c6c17/registry-server/0.log" Feb 19 11:08:05 crc kubenswrapper[4811]: I0219 11:08:05.291809 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-h5g6x_ed315fb7-c37f-4d65-a048-78c1a354ceb3/manager/0.log" Feb 19 11:08:05 crc kubenswrapper[4811]: I0219 11:08:05.557697 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-4tg5n_0dd03f3f-2d9f-433e-942f-ebed08429ccf/manager/0.log" Feb 19 11:08:05 crc kubenswrapper[4811]: I0219 11:08:05.589728 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-jzwmv_34488f24-fe02-449f-a343-288ee29f5ec4/manager/0.log" Feb 19 11:08:05 crc kubenswrapper[4811]: I0219 11:08:05.791783 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5vw7t_9def098d-8e58-4145-a851-134a7b2620b5/operator/0.log" Feb 19 11:08:05 crc kubenswrapper[4811]: I0219 11:08:05.904731 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6f8zw_058e8da6-f607-4153-bb9e-af486f6c87e7/manager/0.log" Feb 19 11:08:06 crc kubenswrapper[4811]: I0219 11:08:06.079291 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-456kj_eab9f83b-209d-4688-8c7f-07840bae01ff/manager/0.log" Feb 19 11:08:06 crc kubenswrapper[4811]: 
I0219 11:08:06.323200 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-8xp7b_75dd64a6-abdc-4d68-b9ca-abe41484b894/manager/0.log"
Feb 19 11:08:06 crc kubenswrapper[4811]: I0219 11:08:06.449551 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-czvjr_b4dc2b86-5147-4420-a9ef-a2ef94991ed3/manager/0.log"
Feb 19 11:08:06 crc kubenswrapper[4811]: I0219 11:08:06.835158 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7b558f465f-z4vhq_7f4a5782-be5f-4a0d-be5d-b8a5014c4711/manager/0.log"
Feb 19 11:08:08 crc kubenswrapper[4811]: I0219 11:08:08.811061 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-s42cl_57410614-54e3-49cb-8afe-ee60a861f161/manager/0.log"
Feb 19 11:08:26 crc kubenswrapper[4811]: I0219 11:08:26.522708 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-84dtz_1fcdf22f-c384-45cd-a738-a6ec8a9d181e/control-plane-machine-set-operator/0.log"
Feb 19 11:08:26 crc kubenswrapper[4811]: I0219 11:08:26.736879 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bf2qf_077abdec-a39e-43d3-bbba-8dd0afbb448b/kube-rbac-proxy/0.log"
Feb 19 11:08:26 crc kubenswrapper[4811]: I0219 11:08:26.788485 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 11:08:26 crc kubenswrapper[4811]: I0219 11:08:26.788563 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 11:08:26 crc kubenswrapper[4811]: I0219 11:08:26.793188 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bf2qf_077abdec-a39e-43d3-bbba-8dd0afbb448b/machine-api-operator/0.log"
Feb 19 11:08:39 crc kubenswrapper[4811]: I0219 11:08:39.129332 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-npddf_09c94fb6-b7d0-434f-8f41-d8dc7ff56e76/cert-manager-controller/0.log"
Feb 19 11:08:39 crc kubenswrapper[4811]: I0219 11:08:39.264207 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-wh2j5_c5c3873a-d5b3-4fb9-8ac2-118ae63676a0/cert-manager-cainjector/0.log"
Feb 19 11:08:39 crc kubenswrapper[4811]: I0219 11:08:39.336116 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-vl4s4_d9b74399-e5df-4e67-b52d-85b37e1a5125/cert-manager-webhook/0.log"
Feb 19 11:08:52 crc kubenswrapper[4811]: I0219 11:08:52.694939 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-4lhw2_4f52fbce-dab0-4cda-b2c7-80a4caab9fff/nmstate-console-plugin/0.log"
Feb 19 11:08:52 crc kubenswrapper[4811]: I0219 11:08:52.887066 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-xgsh7_d74c5c61-8e57-4702-8414-0e94a5080ce5/kube-rbac-proxy/0.log"
Feb 19 11:08:52 crc kubenswrapper[4811]: I0219 11:08:52.899807 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lcv4q_56faa147-2a2c-437e-a221-88fa87e89a56/nmstate-handler/0.log"
Feb 19 11:08:52 crc kubenswrapper[4811]: I0219 11:08:53.054754 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-xgsh7_d74c5c61-8e57-4702-8414-0e94a5080ce5/nmstate-metrics/0.log"
Feb 19 11:08:53 crc kubenswrapper[4811]: I0219 11:08:53.122910 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-s2r8d_cc55c956-c42d-49cc-a97b-cbb28863cdfd/nmstate-operator/0.log"
Feb 19 11:08:53 crc kubenswrapper[4811]: I0219 11:08:53.305431 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-pdhvl_8635072a-afd3-426c-bed8-f0aa9ddf1371/nmstate-webhook/0.log"
Feb 19 11:08:56 crc kubenswrapper[4811]: I0219 11:08:56.788890 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 11:08:56 crc kubenswrapper[4811]: I0219 11:08:56.789584 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 11:08:56 crc kubenswrapper[4811]: I0219 11:08:56.789657 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm"
Feb 19 11:08:56 crc kubenswrapper[4811]: I0219 11:08:56.790718 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5f586fcacc6004c20dfa70ee7a04ad9b63b7e2534b6e867fb032ccc9df2a399"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 11:08:56 crc kubenswrapper[4811]: I0219 11:08:56.790792 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://d5f586fcacc6004c20dfa70ee7a04ad9b63b7e2534b6e867fb032ccc9df2a399" gracePeriod=600
Feb 19 11:08:56 crc kubenswrapper[4811]: I0219 11:08:56.980249 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="d5f586fcacc6004c20dfa70ee7a04ad9b63b7e2534b6e867fb032ccc9df2a399" exitCode=0
Feb 19 11:08:56 crc kubenswrapper[4811]: I0219 11:08:56.980320 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"d5f586fcacc6004c20dfa70ee7a04ad9b63b7e2534b6e867fb032ccc9df2a399"}
Feb 19 11:08:56 crc kubenswrapper[4811]: I0219 11:08:56.980719 4811 scope.go:117] "RemoveContainer" containerID="ecb77dd42fde5f6c3909c44a47ea1c0a181a1a0c7a82b7351079e66b3bbac5f5"
Feb 19 11:08:57 crc kubenswrapper[4811]: I0219 11:08:57.992990 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496"}
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.080080 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-7sw8d_9eec6b61-1b8d-4615-8333-d93630c46ce2/kube-rbac-proxy/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.144557 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-7sw8d_9eec6b61-1b8d-4615-8333-d93630c46ce2/controller/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.296059 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-frr-files/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.445408 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-frr-files/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.466616 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-reloader/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.466629 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-metrics/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.555865 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-reloader/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.745964 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-reloader/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.774890 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-frr-files/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.791358 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-metrics/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.800338 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-metrics/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.975053 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-reloader/0.log"
Feb 19 11:09:22 crc kubenswrapper[4811]: I0219 11:09:22.976348 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-frr-files/0.log"
Feb 19 11:09:23 crc kubenswrapper[4811]: I0219 11:09:23.002361 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-metrics/0.log"
Feb 19 11:09:23 crc kubenswrapper[4811]: I0219 11:09:23.036052 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/controller/0.log"
Feb 19 11:09:23 crc kubenswrapper[4811]: I0219 11:09:23.222870 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/kube-rbac-proxy/0.log"
Feb 19 11:09:23 crc kubenswrapper[4811]: I0219 11:09:23.238786 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/frr-metrics/0.log"
Feb 19 11:09:23 crc kubenswrapper[4811]: I0219 11:09:23.281593 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/kube-rbac-proxy-frr/0.log"
Feb 19 11:09:23 crc kubenswrapper[4811]: I0219 11:09:23.542528 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/reloader/0.log"
Feb 19 11:09:23 crc kubenswrapper[4811]: I0219 11:09:23.556834 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-fx4zj_5742d3da-f213-4a03-a18d-55cbe88d82b5/frr-k8s-webhook-server/0.log"
Feb 19 11:09:23 crc kubenswrapper[4811]: I0219 11:09:23.849274 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b977f8b79-xtsfv_b4c117aa-5a81-4f2a-88d0-de69a1da3c84/manager/0.log"
Feb 19 11:09:23 crc kubenswrapper[4811]: I0219 11:09:23.992036 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c47dd88fc-mx6n7_3647a1c7-33fe-440f-b648-ace58f1deb0d/webhook-server/0.log"
Feb 19 11:09:24 crc kubenswrapper[4811]: I0219 11:09:24.045073 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jswp7_4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461/kube-rbac-proxy/0.log"
Feb 19 11:09:24 crc kubenswrapper[4811]: I0219 11:09:24.638086 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jswp7_4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461/speaker/0.log"
Feb 19 11:09:24 crc kubenswrapper[4811]: I0219 11:09:24.768516 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/frr/0.log"
Feb 19 11:09:37 crc kubenswrapper[4811]: I0219 11:09:37.971174 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/util/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.156501 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/pull/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.178532 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/util/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.211812 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/pull/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.357833 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/util/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.373356 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/pull/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.429970 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/extract/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.566487 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-utilities/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.708555 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-utilities/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.727586 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-content/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.750731 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-content/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.955194 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-content/0.log"
Feb 19 11:09:38 crc kubenswrapper[4811]: I0219 11:09:38.985723 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-utilities/0.log"
Feb 19 11:09:39 crc kubenswrapper[4811]: I0219 11:09:39.230205 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-utilities/0.log"
Feb 19 11:09:39 crc kubenswrapper[4811]: I0219 11:09:39.414959 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-content/0.log"
Feb 19 11:09:39 crc kubenswrapper[4811]: I0219 11:09:39.457126 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-utilities/0.log"
Feb 19 11:09:39 crc kubenswrapper[4811]: I0219 11:09:39.484491 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/registry-server/0.log"
Feb 19 11:09:39 crc kubenswrapper[4811]: I0219 11:09:39.518504 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-content/0.log"
Feb 19 11:09:39 crc kubenswrapper[4811]: I0219 11:09:39.639868 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-utilities/0.log"
Feb 19 11:09:39 crc kubenswrapper[4811]: I0219 11:09:39.686006 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-content/0.log"
Feb 19 11:09:39 crc kubenswrapper[4811]: I0219 11:09:39.903179 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/util/0.log"
Feb 19 11:09:40 crc kubenswrapper[4811]: I0219 11:09:40.103498 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/pull/0.log"
Feb 19 11:09:40 crc kubenswrapper[4811]: I0219 11:09:40.126307 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/util/0.log"
Feb 19 11:09:40 crc kubenswrapper[4811]: I0219 11:09:40.185277 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/registry-server/0.log"
Feb 19 11:09:40 crc kubenswrapper[4811]: I0219 11:09:40.203808 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/pull/0.log"
Feb 19 11:09:40 crc kubenswrapper[4811]: I0219 11:09:40.367261 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/util/0.log"
Feb 19 11:09:40 crc kubenswrapper[4811]: I0219 11:09:40.384370 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/extract/0.log"
Feb 19 11:09:40 crc kubenswrapper[4811]: I0219 11:09:40.394520 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/pull/0.log"
Feb 19 11:09:40 crc kubenswrapper[4811]: I0219 11:09:40.569855 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7lcg9_5fd28c56-84ca-4b57-af19-9c11dc95b397/marketplace-operator/0.log"
Feb 19 11:09:40 crc kubenswrapper[4811]: I0219 11:09:40.873010 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-utilities/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.036054 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-utilities/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.064300 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-content/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.093977 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-content/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.235636 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-utilities/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.236758 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-content/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.387574 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/registry-server/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.482437 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-utilities/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.656759 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-content/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.684665 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-utilities/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.703822 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-content/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.868984 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-utilities/0.log"
Feb 19 11:09:41 crc kubenswrapper[4811]: I0219 11:09:41.938473 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-content/0.log"
Feb 19 11:09:42 crc kubenswrapper[4811]: I0219 11:09:42.326504 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/registry-server/0.log"
Feb 19 11:11:26 crc kubenswrapper[4811]: I0219 11:11:26.809436 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 11:11:26 crc kubenswrapper[4811]: I0219 11:11:26.810411 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 11:11:35 crc kubenswrapper[4811]: I0219 11:11:35.368255 4811 generic.go:334] "Generic (PLEG): container finished" podID="723ec19b-8f91-4289-a85c-c868ee12f4fb" containerID="b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b" exitCode=0
Feb 19 11:11:35 crc kubenswrapper[4811]: I0219 11:11:35.368355 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jw49p/must-gather-w84fj" event={"ID":"723ec19b-8f91-4289-a85c-c868ee12f4fb","Type":"ContainerDied","Data":"b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b"}
Feb 19 11:11:35 crc kubenswrapper[4811]: I0219 11:11:35.369686 4811 scope.go:117] "RemoveContainer" containerID="b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b"
Feb 19 11:11:36 crc kubenswrapper[4811]: I0219 11:11:36.324658 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jw49p_must-gather-w84fj_723ec19b-8f91-4289-a85c-c868ee12f4fb/gather/0.log"
Feb 19 11:11:44 crc kubenswrapper[4811]: I0219 11:11:44.428982 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jw49p/must-gather-w84fj"]
Feb 19 11:11:44 crc kubenswrapper[4811]: I0219 11:11:44.429917 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jw49p/must-gather-w84fj" podUID="723ec19b-8f91-4289-a85c-c868ee12f4fb" containerName="copy" containerID="cri-o://c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e" gracePeriod=2
Feb 19 11:11:44 crc kubenswrapper[4811]: I0219 11:11:44.440114 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jw49p/must-gather-w84fj"]
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.224856 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jw49p_must-gather-w84fj_723ec19b-8f91-4289-a85c-c868ee12f4fb/copy/0.log"
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.225817 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jw49p/must-gather-w84fj"
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.380581 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/723ec19b-8f91-4289-a85c-c868ee12f4fb-must-gather-output\") pod \"723ec19b-8f91-4289-a85c-c868ee12f4fb\" (UID: \"723ec19b-8f91-4289-a85c-c868ee12f4fb\") "
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.380707 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cxw5\" (UniqueName: \"kubernetes.io/projected/723ec19b-8f91-4289-a85c-c868ee12f4fb-kube-api-access-9cxw5\") pod \"723ec19b-8f91-4289-a85c-c868ee12f4fb\" (UID: \"723ec19b-8f91-4289-a85c-c868ee12f4fb\") "
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.405310 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/723ec19b-8f91-4289-a85c-c868ee12f4fb-kube-api-access-9cxw5" (OuterVolumeSpecName: "kube-api-access-9cxw5") pod "723ec19b-8f91-4289-a85c-c868ee12f4fb" (UID: "723ec19b-8f91-4289-a85c-c868ee12f4fb"). InnerVolumeSpecName "kube-api-access-9cxw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.480806 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jw49p_must-gather-w84fj_723ec19b-8f91-4289-a85c-c868ee12f4fb/copy/0.log"
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.481585 4811 generic.go:334] "Generic (PLEG): container finished" podID="723ec19b-8f91-4289-a85c-c868ee12f4fb" containerID="c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e" exitCode=143
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.481655 4811 scope.go:117] "RemoveContainer" containerID="c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e"
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.481874 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jw49p/must-gather-w84fj"
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.482766 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cxw5\" (UniqueName: \"kubernetes.io/projected/723ec19b-8f91-4289-a85c-c868ee12f4fb-kube-api-access-9cxw5\") on node \"crc\" DevicePath \"\""
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.526009 4811 scope.go:117] "RemoveContainer" containerID="b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b"
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.667596 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/723ec19b-8f91-4289-a85c-c868ee12f4fb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "723ec19b-8f91-4289-a85c-c868ee12f4fb" (UID: "723ec19b-8f91-4289-a85c-c868ee12f4fb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 11:11:45 crc kubenswrapper[4811]: I0219 11:11:45.689509 4811 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/723ec19b-8f91-4289-a85c-c868ee12f4fb-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 19 11:11:46 crc kubenswrapper[4811]: I0219 11:11:46.027031 4811 scope.go:117] "RemoveContainer" containerID="c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e"
Feb 19 11:11:46 crc kubenswrapper[4811]: E0219 11:11:46.027832 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e\": container with ID starting with c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e not found: ID does not exist" containerID="c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e"
Feb 19 11:11:46 crc kubenswrapper[4811]: I0219 11:11:46.027903 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e"} err="failed to get container status \"c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e\": rpc error: code = NotFound desc = could not find container \"c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e\": container with ID starting with c6bf93c3325b4be3373076140162da821d6dd7cf608e4e1a614b921773dd729e not found: ID does not exist"
Feb 19 11:11:46 crc kubenswrapper[4811]: I0219 11:11:46.027946 4811 scope.go:117] "RemoveContainer" containerID="b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b"
Feb 19 11:11:46 crc kubenswrapper[4811]: E0219 11:11:46.028600 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b\": container with ID starting with b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b not found: ID does not exist" containerID="b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b"
Feb 19 11:11:46 crc kubenswrapper[4811]: I0219 11:11:46.028667 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b"} err="failed to get container status \"b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b\": rpc error: code = NotFound desc = could not find container \"b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b\": container with ID starting with b20084dc7da20299b3d07916db53b8ba1082be73720984cba70b7e8045a0955b not found: ID does not exist"
Feb 19 11:11:46 crc kubenswrapper[4811]: I0219 11:11:46.595275 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="723ec19b-8f91-4289-a85c-c868ee12f4fb" path="/var/lib/kubelet/pods/723ec19b-8f91-4289-a85c-c868ee12f4fb/volumes"
Feb 19 11:11:56 crc kubenswrapper[4811]: I0219 11:11:56.788903 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 11:11:56 crc kubenswrapper[4811]: I0219 11:11:56.789571 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.576095 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qvgsr"]
Feb 19 11:12:09 crc kubenswrapper[4811]: E0219 11:12:09.577312 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerName="extract-utilities"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.577337 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerName="extract-utilities"
Feb 19 11:12:09 crc kubenswrapper[4811]: E0219 11:12:09.577357 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerName="extract-content"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.577366 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerName="extract-content"
Feb 19 11:12:09 crc kubenswrapper[4811]: E0219 11:12:09.577396 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723ec19b-8f91-4289-a85c-c868ee12f4fb" containerName="gather"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.577408 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="723ec19b-8f91-4289-a85c-c868ee12f4fb" containerName="gather"
Feb 19 11:12:09 crc kubenswrapper[4811]: E0219 11:12:09.577434 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723ec19b-8f91-4289-a85c-c868ee12f4fb" containerName="copy"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.577441 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="723ec19b-8f91-4289-a85c-c868ee12f4fb" containerName="copy"
Feb 19 11:12:09 crc kubenswrapper[4811]: E0219 11:12:09.577456 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1965e95a-5442-4307-928c-30008502c0a3" containerName="container-00"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.577465 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="1965e95a-5442-4307-928c-30008502c0a3" containerName="container-00"
Feb 19 11:12:09 crc kubenswrapper[4811]: E0219 11:12:09.577514 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerName="registry-server"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.577525 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerName="registry-server"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.577780 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="1965e95a-5442-4307-928c-30008502c0a3" containerName="container-00"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.577793 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="723ec19b-8f91-4289-a85c-c868ee12f4fb" containerName="gather"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.577817 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="723ec19b-8f91-4289-a85c-c868ee12f4fb" containerName="copy"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.577828 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef06d1a3-6404-4a0b-9522-2a3d7fe26490" containerName="registry-server"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.579418 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvgsr"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.615844 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvgsr"]
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.756814 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-catalog-content\") pod \"redhat-marketplace-qvgsr\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " pod="openshift-marketplace/redhat-marketplace-qvgsr"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.757022 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbvv6\" (UniqueName: \"kubernetes.io/projected/f0628070-a973-41bb-b1d7-737a572ac19d-kube-api-access-bbvv6\") pod \"redhat-marketplace-qvgsr\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " pod="openshift-marketplace/redhat-marketplace-qvgsr"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.757117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-utilities\") pod \"redhat-marketplace-qvgsr\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " pod="openshift-marketplace/redhat-marketplace-qvgsr"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.859068 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbvv6\" (UniqueName: \"kubernetes.io/projected/f0628070-a973-41bb-b1d7-737a572ac19d-kube-api-access-bbvv6\") pod \"redhat-marketplace-qvgsr\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " pod="openshift-marketplace/redhat-marketplace-qvgsr"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.859176 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-utilities\") pod \"redhat-marketplace-qvgsr\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " pod="openshift-marketplace/redhat-marketplace-qvgsr"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.859234 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-catalog-content\") pod \"redhat-marketplace-qvgsr\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " pod="openshift-marketplace/redhat-marketplace-qvgsr"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.859903 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-catalog-content\") pod \"redhat-marketplace-qvgsr\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " pod="openshift-marketplace/redhat-marketplace-qvgsr"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.860130 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-utilities\") pod \"redhat-marketplace-qvgsr\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " pod="openshift-marketplace/redhat-marketplace-qvgsr"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.884599 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbvv6\" (UniqueName: \"kubernetes.io/projected/f0628070-a973-41bb-b1d7-737a572ac19d-kube-api-access-bbvv6\") pod \"redhat-marketplace-qvgsr\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " pod="openshift-marketplace/redhat-marketplace-qvgsr"
Feb 19 11:12:09 crc kubenswrapper[4811]: I0219 11:12:09.928764 4811 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvgsr" Feb 19 11:12:10 crc kubenswrapper[4811]: I0219 11:12:10.653759 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvgsr"] Feb 19 11:12:10 crc kubenswrapper[4811]: I0219 11:12:10.740215 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvgsr" event={"ID":"f0628070-a973-41bb-b1d7-737a572ac19d","Type":"ContainerStarted","Data":"c575b65a6aaf67478faed6667a1712848bffc0f365bfbd317c14ee649548e66a"} Feb 19 11:12:11 crc kubenswrapper[4811]: I0219 11:12:11.755581 4811 generic.go:334] "Generic (PLEG): container finished" podID="f0628070-a973-41bb-b1d7-737a572ac19d" containerID="c9169adbdb165ac0658c0e3d38f1f904009cc4d94ff898617e61102fcafdb21f" exitCode=0 Feb 19 11:12:11 crc kubenswrapper[4811]: I0219 11:12:11.755706 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvgsr" event={"ID":"f0628070-a973-41bb-b1d7-737a572ac19d","Type":"ContainerDied","Data":"c9169adbdb165ac0658c0e3d38f1f904009cc4d94ff898617e61102fcafdb21f"} Feb 19 11:12:11 crc kubenswrapper[4811]: I0219 11:12:11.758953 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 11:12:12 crc kubenswrapper[4811]: I0219 11:12:12.769672 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvgsr" event={"ID":"f0628070-a973-41bb-b1d7-737a572ac19d","Type":"ContainerStarted","Data":"9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb"} Feb 19 11:12:13 crc kubenswrapper[4811]: I0219 11:12:13.781853 4811 generic.go:334] "Generic (PLEG): container finished" podID="f0628070-a973-41bb-b1d7-737a572ac19d" containerID="9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb" exitCode=0 Feb 19 11:12:13 crc kubenswrapper[4811]: I0219 11:12:13.781954 4811 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-qvgsr" event={"ID":"f0628070-a973-41bb-b1d7-737a572ac19d","Type":"ContainerDied","Data":"9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb"} Feb 19 11:12:14 crc kubenswrapper[4811]: I0219 11:12:14.793291 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvgsr" event={"ID":"f0628070-a973-41bb-b1d7-737a572ac19d","Type":"ContainerStarted","Data":"b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0"} Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.377960 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qvgsr" podStartSLOduration=6.916717369 podStartE2EDuration="9.3779111s" podCreationTimestamp="2026-02-19 11:12:09 +0000 UTC" firstStartedPulling="2026-02-19 11:12:11.75857946 +0000 UTC m=+3820.285044146" lastFinishedPulling="2026-02-19 11:12:14.219773191 +0000 UTC m=+3822.746237877" observedRunningTime="2026-02-19 11:12:14.82648272 +0000 UTC m=+3823.352947416" watchObservedRunningTime="2026-02-19 11:12:18.3779111 +0000 UTC m=+3826.904375786" Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.385628 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x8bvg"] Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.388287 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.395990 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8bvg"] Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.451756 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-catalog-content\") pod \"community-operators-x8bvg\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.451889 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-utilities\") pod \"community-operators-x8bvg\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.451977 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmms7\" (UniqueName: \"kubernetes.io/projected/088905b4-cbed-4b42-ae96-7f559999674e-kube-api-access-mmms7\") pod \"community-operators-x8bvg\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.554801 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-catalog-content\") pod \"community-operators-x8bvg\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.554924 4811 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-utilities\") pod \"community-operators-x8bvg\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.555054 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmms7\" (UniqueName: \"kubernetes.io/projected/088905b4-cbed-4b42-ae96-7f559999674e-kube-api-access-mmms7\") pod \"community-operators-x8bvg\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.555719 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-utilities\") pod \"community-operators-x8bvg\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.556057 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-catalog-content\") pod \"community-operators-x8bvg\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.579535 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmms7\" (UniqueName: \"kubernetes.io/projected/088905b4-cbed-4b42-ae96-7f559999674e-kube-api-access-mmms7\") pod \"community-operators-x8bvg\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:18 crc kubenswrapper[4811]: I0219 11:12:18.716434 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:19 crc kubenswrapper[4811]: I0219 11:12:19.368212 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8bvg"] Feb 19 11:12:19 crc kubenswrapper[4811]: I0219 11:12:19.860110 4811 generic.go:334] "Generic (PLEG): container finished" podID="088905b4-cbed-4b42-ae96-7f559999674e" containerID="e18c93cce3ca65043439b6a12d254b75859c4388ad5059a69cdf2807b0fac7a1" exitCode=0 Feb 19 11:12:19 crc kubenswrapper[4811]: I0219 11:12:19.860180 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8bvg" event={"ID":"088905b4-cbed-4b42-ae96-7f559999674e","Type":"ContainerDied","Data":"e18c93cce3ca65043439b6a12d254b75859c4388ad5059a69cdf2807b0fac7a1"} Feb 19 11:12:19 crc kubenswrapper[4811]: I0219 11:12:19.860451 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8bvg" event={"ID":"088905b4-cbed-4b42-ae96-7f559999674e","Type":"ContainerStarted","Data":"dbb0f19eb0a632a7b9346f1c2ff5b55d0b820ee5e436f610b3336a2737381125"} Feb 19 11:12:19 crc kubenswrapper[4811]: I0219 11:12:19.929803 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qvgsr" Feb 19 11:12:19 crc kubenswrapper[4811]: I0219 11:12:19.929876 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qvgsr" Feb 19 11:12:19 crc kubenswrapper[4811]: I0219 11:12:19.985771 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qvgsr" Feb 19 11:12:20 crc kubenswrapper[4811]: I0219 11:12:20.877095 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8bvg" 
event={"ID":"088905b4-cbed-4b42-ae96-7f559999674e","Type":"ContainerStarted","Data":"03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce"} Feb 19 11:12:20 crc kubenswrapper[4811]: I0219 11:12:20.934417 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qvgsr" Feb 19 11:12:21 crc kubenswrapper[4811]: I0219 11:12:21.890420 4811 generic.go:334] "Generic (PLEG): container finished" podID="088905b4-cbed-4b42-ae96-7f559999674e" containerID="03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce" exitCode=0 Feb 19 11:12:21 crc kubenswrapper[4811]: I0219 11:12:21.890581 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8bvg" event={"ID":"088905b4-cbed-4b42-ae96-7f559999674e","Type":"ContainerDied","Data":"03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce"} Feb 19 11:12:22 crc kubenswrapper[4811]: I0219 11:12:22.357395 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvgsr"] Feb 19 11:12:22 crc kubenswrapper[4811]: I0219 11:12:22.899082 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qvgsr" podUID="f0628070-a973-41bb-b1d7-737a572ac19d" containerName="registry-server" containerID="cri-o://b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0" gracePeriod=2 Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.385815 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvgsr" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.489620 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-catalog-content\") pod \"f0628070-a973-41bb-b1d7-737a572ac19d\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.489840 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbvv6\" (UniqueName: \"kubernetes.io/projected/f0628070-a973-41bb-b1d7-737a572ac19d-kube-api-access-bbvv6\") pod \"f0628070-a973-41bb-b1d7-737a572ac19d\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.489971 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-utilities\") pod \"f0628070-a973-41bb-b1d7-737a572ac19d\" (UID: \"f0628070-a973-41bb-b1d7-737a572ac19d\") " Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.491022 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-utilities" (OuterVolumeSpecName: "utilities") pod "f0628070-a973-41bb-b1d7-737a572ac19d" (UID: "f0628070-a973-41bb-b1d7-737a572ac19d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.491993 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.503832 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0628070-a973-41bb-b1d7-737a572ac19d-kube-api-access-bbvv6" (OuterVolumeSpecName: "kube-api-access-bbvv6") pod "f0628070-a973-41bb-b1d7-737a572ac19d" (UID: "f0628070-a973-41bb-b1d7-737a572ac19d"). InnerVolumeSpecName "kube-api-access-bbvv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.526991 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0628070-a973-41bb-b1d7-737a572ac19d" (UID: "f0628070-a973-41bb-b1d7-737a572ac19d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.594211 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0628070-a973-41bb-b1d7-737a572ac19d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.594612 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbvv6\" (UniqueName: \"kubernetes.io/projected/f0628070-a973-41bb-b1d7-737a572ac19d-kube-api-access-bbvv6\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.913350 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8bvg" event={"ID":"088905b4-cbed-4b42-ae96-7f559999674e","Type":"ContainerStarted","Data":"ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e"} Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.916152 4811 generic.go:334] "Generic (PLEG): container finished" podID="f0628070-a973-41bb-b1d7-737a572ac19d" containerID="b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0" exitCode=0 Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.916222 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvgsr" event={"ID":"f0628070-a973-41bb-b1d7-737a572ac19d","Type":"ContainerDied","Data":"b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0"} Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.916263 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvgsr" event={"ID":"f0628070-a973-41bb-b1d7-737a572ac19d","Type":"ContainerDied","Data":"c575b65a6aaf67478faed6667a1712848bffc0f365bfbd317c14ee649548e66a"} Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.916288 4811 scope.go:117] "RemoveContainer" 
containerID="b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.916524 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvgsr" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.939167 4811 scope.go:117] "RemoveContainer" containerID="9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.956810 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x8bvg" podStartSLOduration=2.606361364 podStartE2EDuration="5.956778698s" podCreationTimestamp="2026-02-19 11:12:18 +0000 UTC" firstStartedPulling="2026-02-19 11:12:19.862839042 +0000 UTC m=+3828.389303728" lastFinishedPulling="2026-02-19 11:12:23.213256376 +0000 UTC m=+3831.739721062" observedRunningTime="2026-02-19 11:12:23.940305433 +0000 UTC m=+3832.466770119" watchObservedRunningTime="2026-02-19 11:12:23.956778698 +0000 UTC m=+3832.483243384" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.962492 4811 scope.go:117] "RemoveContainer" containerID="c9169adbdb165ac0658c0e3d38f1f904009cc4d94ff898617e61102fcafdb21f" Feb 19 11:12:23 crc kubenswrapper[4811]: I0219 11:12:23.992516 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvgsr"] Feb 19 11:12:24 crc kubenswrapper[4811]: I0219 11:12:24.003769 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvgsr"] Feb 19 11:12:24 crc kubenswrapper[4811]: I0219 11:12:24.012212 4811 scope.go:117] "RemoveContainer" containerID="b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0" Feb 19 11:12:24 crc kubenswrapper[4811]: E0219 11:12:24.013477 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0\": container with ID starting with b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0 not found: ID does not exist" containerID="b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0" Feb 19 11:12:24 crc kubenswrapper[4811]: I0219 11:12:24.013579 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0"} err="failed to get container status \"b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0\": rpc error: code = NotFound desc = could not find container \"b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0\": container with ID starting with b69548087e2128302cac4abb81f333727005c5e581c5951817b77af3c38de7f0 not found: ID does not exist" Feb 19 11:12:24 crc kubenswrapper[4811]: I0219 11:12:24.013621 4811 scope.go:117] "RemoveContainer" containerID="9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb" Feb 19 11:12:24 crc kubenswrapper[4811]: E0219 11:12:24.013998 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb\": container with ID starting with 9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb not found: ID does not exist" containerID="9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb" Feb 19 11:12:24 crc kubenswrapper[4811]: I0219 11:12:24.014054 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb"} err="failed to get container status \"9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb\": rpc error: code = NotFound desc = could not find container \"9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb\": container with ID 
starting with 9667911d8fd17ed8f10d29c1b7f10ff4bd8ef5dc8cb3dcd1c1a38bc8cc9153fb not found: ID does not exist" Feb 19 11:12:24 crc kubenswrapper[4811]: I0219 11:12:24.014087 4811 scope.go:117] "RemoveContainer" containerID="c9169adbdb165ac0658c0e3d38f1f904009cc4d94ff898617e61102fcafdb21f" Feb 19 11:12:24 crc kubenswrapper[4811]: E0219 11:12:24.014299 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9169adbdb165ac0658c0e3d38f1f904009cc4d94ff898617e61102fcafdb21f\": container with ID starting with c9169adbdb165ac0658c0e3d38f1f904009cc4d94ff898617e61102fcafdb21f not found: ID does not exist" containerID="c9169adbdb165ac0658c0e3d38f1f904009cc4d94ff898617e61102fcafdb21f" Feb 19 11:12:24 crc kubenswrapper[4811]: I0219 11:12:24.014327 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9169adbdb165ac0658c0e3d38f1f904009cc4d94ff898617e61102fcafdb21f"} err="failed to get container status \"c9169adbdb165ac0658c0e3d38f1f904009cc4d94ff898617e61102fcafdb21f\": rpc error: code = NotFound desc = could not find container \"c9169adbdb165ac0658c0e3d38f1f904009cc4d94ff898617e61102fcafdb21f\": container with ID starting with c9169adbdb165ac0658c0e3d38f1f904009cc4d94ff898617e61102fcafdb21f not found: ID does not exist" Feb 19 11:12:24 crc kubenswrapper[4811]: I0219 11:12:24.598153 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0628070-a973-41bb-b1d7-737a572ac19d" path="/var/lib/kubelet/pods/f0628070-a973-41bb-b1d7-737a572ac19d/volumes" Feb 19 11:12:26 crc kubenswrapper[4811]: I0219 11:12:26.788403 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:12:26 crc kubenswrapper[4811]: I0219 
11:12:26.788525 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:12:26 crc kubenswrapper[4811]: I0219 11:12:26.788596 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 11:12:26 crc kubenswrapper[4811]: I0219 11:12:26.790304 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 11:12:26 crc kubenswrapper[4811]: I0219 11:12:26.790542 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" containerID="cri-o://133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" gracePeriod=600 Feb 19 11:12:26 crc kubenswrapper[4811]: E0219 11:12:26.932129 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:12:26 crc kubenswrapper[4811]: I0219 11:12:26.953113 4811 generic.go:334] "Generic (PLEG): container finished" 
podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" exitCode=0 Feb 19 11:12:26 crc kubenswrapper[4811]: I0219 11:12:26.953188 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496"} Feb 19 11:12:26 crc kubenswrapper[4811]: I0219 11:12:26.953280 4811 scope.go:117] "RemoveContainer" containerID="d5f586fcacc6004c20dfa70ee7a04ad9b63b7e2534b6e867fb032ccc9df2a399" Feb 19 11:12:26 crc kubenswrapper[4811]: I0219 11:12:26.954087 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:12:26 crc kubenswrapper[4811]: E0219 11:12:26.954380 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:12:28 crc kubenswrapper[4811]: I0219 11:12:28.721667 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:28 crc kubenswrapper[4811]: I0219 11:12:28.722009 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:28 crc kubenswrapper[4811]: I0219 11:12:28.777401 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:29 crc kubenswrapper[4811]: I0219 11:12:29.028536 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:30 crc kubenswrapper[4811]: I0219 11:12:30.161423 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x8bvg"] Feb 19 11:12:30 crc kubenswrapper[4811]: I0219 11:12:30.994320 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x8bvg" podUID="088905b4-cbed-4b42-ae96-7f559999674e" containerName="registry-server" containerID="cri-o://ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e" gracePeriod=2 Feb 19 11:12:31 crc kubenswrapper[4811]: I0219 11:12:31.559924 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:31 crc kubenswrapper[4811]: I0219 11:12:31.594038 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmms7\" (UniqueName: \"kubernetes.io/projected/088905b4-cbed-4b42-ae96-7f559999674e-kube-api-access-mmms7\") pod \"088905b4-cbed-4b42-ae96-7f559999674e\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " Feb 19 11:12:31 crc kubenswrapper[4811]: I0219 11:12:31.594206 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-utilities\") pod \"088905b4-cbed-4b42-ae96-7f559999674e\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " Feb 19 11:12:31 crc kubenswrapper[4811]: I0219 11:12:31.594443 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-catalog-content\") pod \"088905b4-cbed-4b42-ae96-7f559999674e\" (UID: \"088905b4-cbed-4b42-ae96-7f559999674e\") " Feb 19 11:12:31 crc kubenswrapper[4811]: I0219 11:12:31.595400 4811 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-utilities" (OuterVolumeSpecName: "utilities") pod "088905b4-cbed-4b42-ae96-7f559999674e" (UID: "088905b4-cbed-4b42-ae96-7f559999674e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:12:31 crc kubenswrapper[4811]: I0219 11:12:31.602319 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/088905b4-cbed-4b42-ae96-7f559999674e-kube-api-access-mmms7" (OuterVolumeSpecName: "kube-api-access-mmms7") pod "088905b4-cbed-4b42-ae96-7f559999674e" (UID: "088905b4-cbed-4b42-ae96-7f559999674e"). InnerVolumeSpecName "kube-api-access-mmms7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:12:31 crc kubenswrapper[4811]: I0219 11:12:31.654391 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "088905b4-cbed-4b42-ae96-7f559999674e" (UID: "088905b4-cbed-4b42-ae96-7f559999674e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:12:31 crc kubenswrapper[4811]: I0219 11:12:31.698213 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmms7\" (UniqueName: \"kubernetes.io/projected/088905b4-cbed-4b42-ae96-7f559999674e-kube-api-access-mmms7\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:31 crc kubenswrapper[4811]: I0219 11:12:31.698271 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:31 crc kubenswrapper[4811]: I0219 11:12:31.698286 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088905b4-cbed-4b42-ae96-7f559999674e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.010559 4811 generic.go:334] "Generic (PLEG): container finished" podID="088905b4-cbed-4b42-ae96-7f559999674e" containerID="ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e" exitCode=0 Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.010582 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x8bvg" Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.010616 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8bvg" event={"ID":"088905b4-cbed-4b42-ae96-7f559999674e","Type":"ContainerDied","Data":"ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e"} Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.010689 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8bvg" event={"ID":"088905b4-cbed-4b42-ae96-7f559999674e","Type":"ContainerDied","Data":"dbb0f19eb0a632a7b9346f1c2ff5b55d0b820ee5e436f610b3336a2737381125"} Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.010711 4811 scope.go:117] "RemoveContainer" containerID="ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e" Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.035182 4811 scope.go:117] "RemoveContainer" containerID="03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce" Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.055315 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x8bvg"] Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.063769 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x8bvg"] Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.081061 4811 scope.go:117] "RemoveContainer" containerID="e18c93cce3ca65043439b6a12d254b75859c4388ad5059a69cdf2807b0fac7a1" Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.117160 4811 scope.go:117] "RemoveContainer" containerID="ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e" Feb 19 11:12:32 crc kubenswrapper[4811]: E0219 11:12:32.117745 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e\": container with ID starting with ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e not found: ID does not exist" containerID="ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e" Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.117795 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e"} err="failed to get container status \"ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e\": rpc error: code = NotFound desc = could not find container \"ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e\": container with ID starting with ca03b99835e6fd8ffa4fe27afb26b8b228bf5d99225b1c1c30bd7d3066dde58e not found: ID does not exist" Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.117827 4811 scope.go:117] "RemoveContainer" containerID="03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce" Feb 19 11:12:32 crc kubenswrapper[4811]: E0219 11:12:32.118405 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce\": container with ID starting with 03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce not found: ID does not exist" containerID="03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce" Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.118433 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce"} err="failed to get container status \"03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce\": rpc error: code = NotFound desc = could not find container \"03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce\": container with ID 
starting with 03fd663a4ebde25de73d121c4290d6c081db188e1cdd40d202751c69b04b5bce not found: ID does not exist" Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.118465 4811 scope.go:117] "RemoveContainer" containerID="e18c93cce3ca65043439b6a12d254b75859c4388ad5059a69cdf2807b0fac7a1" Feb 19 11:12:32 crc kubenswrapper[4811]: E0219 11:12:32.119121 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18c93cce3ca65043439b6a12d254b75859c4388ad5059a69cdf2807b0fac7a1\": container with ID starting with e18c93cce3ca65043439b6a12d254b75859c4388ad5059a69cdf2807b0fac7a1 not found: ID does not exist" containerID="e18c93cce3ca65043439b6a12d254b75859c4388ad5059a69cdf2807b0fac7a1" Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.119186 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18c93cce3ca65043439b6a12d254b75859c4388ad5059a69cdf2807b0fac7a1"} err="failed to get container status \"e18c93cce3ca65043439b6a12d254b75859c4388ad5059a69cdf2807b0fac7a1\": rpc error: code = NotFound desc = could not find container \"e18c93cce3ca65043439b6a12d254b75859c4388ad5059a69cdf2807b0fac7a1\": container with ID starting with e18c93cce3ca65043439b6a12d254b75859c4388ad5059a69cdf2807b0fac7a1 not found: ID does not exist" Feb 19 11:12:32 crc kubenswrapper[4811]: I0219 11:12:32.599527 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="088905b4-cbed-4b42-ae96-7f559999674e" path="/var/lib/kubelet/pods/088905b4-cbed-4b42-ae96-7f559999674e/volumes" Feb 19 11:12:40 crc kubenswrapper[4811]: I0219 11:12:40.583631 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:12:40 crc kubenswrapper[4811]: E0219 11:12:40.584407 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:12:53 crc kubenswrapper[4811]: I0219 11:12:53.583792 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:12:53 crc kubenswrapper[4811]: E0219 11:12:53.584584 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:13:06 crc kubenswrapper[4811]: I0219 11:13:06.582954 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:13:06 crc kubenswrapper[4811]: E0219 11:13:06.583591 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:13:11 crc kubenswrapper[4811]: I0219 11:13:11.732032 4811 scope.go:117] "RemoveContainer" containerID="c06a9ab57b361aed0fb06be2148ae3363fa762e1794e3135ee5103ba16b0e748" Feb 19 11:13:17 crc kubenswrapper[4811]: I0219 11:13:17.583630 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:13:17 crc 
kubenswrapper[4811]: E0219 11:13:17.584322 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:13:28 crc kubenswrapper[4811]: I0219 11:13:28.584066 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:13:28 crc kubenswrapper[4811]: E0219 11:13:28.585083 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:13:39 crc kubenswrapper[4811]: I0219 11:13:39.584065 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:13:39 crc kubenswrapper[4811]: E0219 11:13:39.585394 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:13:53 crc kubenswrapper[4811]: I0219 11:13:53.583555 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 
19 11:13:53 crc kubenswrapper[4811]: E0219 11:13:53.584181 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:14:06 crc kubenswrapper[4811]: I0219 11:14:06.584168 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:14:06 crc kubenswrapper[4811]: E0219 11:14:06.585083 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:14:21 crc kubenswrapper[4811]: I0219 11:14:21.583665 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:14:21 crc kubenswrapper[4811]: E0219 11:14:21.584770 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:14:32 crc kubenswrapper[4811]: I0219 11:14:32.589155 4811 scope.go:117] "RemoveContainer" 
containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:14:32 crc kubenswrapper[4811]: E0219 11:14:32.590039 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.601136 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fj7sq/must-gather-842pr"] Feb 19 11:14:46 crc kubenswrapper[4811]: E0219 11:14:46.601981 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088905b4-cbed-4b42-ae96-7f559999674e" containerName="extract-utilities" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.601996 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="088905b4-cbed-4b42-ae96-7f559999674e" containerName="extract-utilities" Feb 19 11:14:46 crc kubenswrapper[4811]: E0219 11:14:46.602015 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088905b4-cbed-4b42-ae96-7f559999674e" containerName="extract-content" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.602022 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="088905b4-cbed-4b42-ae96-7f559999674e" containerName="extract-content" Feb 19 11:14:46 crc kubenswrapper[4811]: E0219 11:14:46.602040 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088905b4-cbed-4b42-ae96-7f559999674e" containerName="registry-server" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.602045 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="088905b4-cbed-4b42-ae96-7f559999674e" containerName="registry-server" Feb 19 11:14:46 crc kubenswrapper[4811]: E0219 11:14:46.602068 
4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0628070-a973-41bb-b1d7-737a572ac19d" containerName="extract-utilities" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.602076 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0628070-a973-41bb-b1d7-737a572ac19d" containerName="extract-utilities" Feb 19 11:14:46 crc kubenswrapper[4811]: E0219 11:14:46.602091 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0628070-a973-41bb-b1d7-737a572ac19d" containerName="extract-content" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.602097 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0628070-a973-41bb-b1d7-737a572ac19d" containerName="extract-content" Feb 19 11:14:46 crc kubenswrapper[4811]: E0219 11:14:46.602109 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0628070-a973-41bb-b1d7-737a572ac19d" containerName="registry-server" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.602114 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0628070-a973-41bb-b1d7-737a572ac19d" containerName="registry-server" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.602290 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="088905b4-cbed-4b42-ae96-7f559999674e" containerName="registry-server" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.602314 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0628070-a973-41bb-b1d7-737a572ac19d" containerName="registry-server" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.603306 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fj7sq/must-gather-842pr" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.611803 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fj7sq"/"kube-root-ca.crt" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.611944 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fj7sq"/"default-dockercfg-t5brq" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.612041 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fj7sq"/"openshift-service-ca.crt" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.623963 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fj7sq/must-gather-842pr"] Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.638076 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd594f5c-e579-4c78-a29a-6ea266fd98d0-must-gather-output\") pod \"must-gather-842pr\" (UID: \"fd594f5c-e579-4c78-a29a-6ea266fd98d0\") " pod="openshift-must-gather-fj7sq/must-gather-842pr" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.638292 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phf26\" (UniqueName: \"kubernetes.io/projected/fd594f5c-e579-4c78-a29a-6ea266fd98d0-kube-api-access-phf26\") pod \"must-gather-842pr\" (UID: \"fd594f5c-e579-4c78-a29a-6ea266fd98d0\") " pod="openshift-must-gather-fj7sq/must-gather-842pr" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.741837 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd594f5c-e579-4c78-a29a-6ea266fd98d0-must-gather-output\") pod \"must-gather-842pr\" (UID: \"fd594f5c-e579-4c78-a29a-6ea266fd98d0\") " 
pod="openshift-must-gather-fj7sq/must-gather-842pr" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.741978 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phf26\" (UniqueName: \"kubernetes.io/projected/fd594f5c-e579-4c78-a29a-6ea266fd98d0-kube-api-access-phf26\") pod \"must-gather-842pr\" (UID: \"fd594f5c-e579-4c78-a29a-6ea266fd98d0\") " pod="openshift-must-gather-fj7sq/must-gather-842pr" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.742427 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd594f5c-e579-4c78-a29a-6ea266fd98d0-must-gather-output\") pod \"must-gather-842pr\" (UID: \"fd594f5c-e579-4c78-a29a-6ea266fd98d0\") " pod="openshift-must-gather-fj7sq/must-gather-842pr" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.765753 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phf26\" (UniqueName: \"kubernetes.io/projected/fd594f5c-e579-4c78-a29a-6ea266fd98d0-kube-api-access-phf26\") pod \"must-gather-842pr\" (UID: \"fd594f5c-e579-4c78-a29a-6ea266fd98d0\") " pod="openshift-must-gather-fj7sq/must-gather-842pr" Feb 19 11:14:46 crc kubenswrapper[4811]: I0219 11:14:46.929154 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fj7sq/must-gather-842pr" Feb 19 11:14:47 crc kubenswrapper[4811]: I0219 11:14:47.452741 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fj7sq/must-gather-842pr"] Feb 19 11:14:47 crc kubenswrapper[4811]: I0219 11:14:47.584657 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:14:47 crc kubenswrapper[4811]: E0219 11:14:47.585099 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:14:48 crc kubenswrapper[4811]: I0219 11:14:48.290074 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fj7sq/must-gather-842pr" event={"ID":"fd594f5c-e579-4c78-a29a-6ea266fd98d0","Type":"ContainerStarted","Data":"386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688"} Feb 19 11:14:48 crc kubenswrapper[4811]: I0219 11:14:48.290487 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fj7sq/must-gather-842pr" event={"ID":"fd594f5c-e579-4c78-a29a-6ea266fd98d0","Type":"ContainerStarted","Data":"2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272"} Feb 19 11:14:48 crc kubenswrapper[4811]: I0219 11:14:48.290504 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fj7sq/must-gather-842pr" event={"ID":"fd594f5c-e579-4c78-a29a-6ea266fd98d0","Type":"ContainerStarted","Data":"8a50f47ba3234b9b5d4869a2976c651853b89b4a659026837f73a922b3df1f79"} Feb 19 11:14:48 crc kubenswrapper[4811]: I0219 11:14:48.314059 4811 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-fj7sq/must-gather-842pr" podStartSLOduration=2.314018703 podStartE2EDuration="2.314018703s" podCreationTimestamp="2026-02-19 11:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 11:14:48.30517151 +0000 UTC m=+3976.831636196" watchObservedRunningTime="2026-02-19 11:14:48.314018703 +0000 UTC m=+3976.840483409" Feb 19 11:14:51 crc kubenswrapper[4811]: I0219 11:14:51.732860 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fj7sq/crc-debug-j5qb4"] Feb 19 11:14:51 crc kubenswrapper[4811]: I0219 11:14:51.735215 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" Feb 19 11:14:51 crc kubenswrapper[4811]: I0219 11:14:51.854117 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmrb6\" (UniqueName: \"kubernetes.io/projected/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-kube-api-access-nmrb6\") pod \"crc-debug-j5qb4\" (UID: \"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe\") " pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" Feb 19 11:14:51 crc kubenswrapper[4811]: I0219 11:14:51.854280 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-host\") pod \"crc-debug-j5qb4\" (UID: \"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe\") " pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" Feb 19 11:14:51 crc kubenswrapper[4811]: I0219 11:14:51.956198 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmrb6\" (UniqueName: \"kubernetes.io/projected/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-kube-api-access-nmrb6\") pod \"crc-debug-j5qb4\" (UID: \"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe\") " 
pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" Feb 19 11:14:51 crc kubenswrapper[4811]: I0219 11:14:51.956909 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-host\") pod \"crc-debug-j5qb4\" (UID: \"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe\") " pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" Feb 19 11:14:51 crc kubenswrapper[4811]: I0219 11:14:51.957014 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-host\") pod \"crc-debug-j5qb4\" (UID: \"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe\") " pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" Feb 19 11:14:51 crc kubenswrapper[4811]: I0219 11:14:51.984649 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmrb6\" (UniqueName: \"kubernetes.io/projected/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-kube-api-access-nmrb6\") pod \"crc-debug-j5qb4\" (UID: \"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe\") " pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" Feb 19 11:14:52 crc kubenswrapper[4811]: I0219 11:14:52.058444 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" Feb 19 11:14:52 crc kubenswrapper[4811]: W0219 11:14:52.087762 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8b7ade7_bf7f_44bf_a79a_3e8d83bc62fe.slice/crio-a6af4a7b46cf6acb8a64d6f30ac66be8f7c5f999a548772026afc62201bd2928 WatchSource:0}: Error finding container a6af4a7b46cf6acb8a64d6f30ac66be8f7c5f999a548772026afc62201bd2928: Status 404 returned error can't find the container with id a6af4a7b46cf6acb8a64d6f30ac66be8f7c5f999a548772026afc62201bd2928 Feb 19 11:14:52 crc kubenswrapper[4811]: I0219 11:14:52.328158 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" event={"ID":"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe","Type":"ContainerStarted","Data":"b7b881e060f1baf914d1e036335e32d77a20fc5cec7a9d82ed33c1bacc2235f4"} Feb 19 11:14:52 crc kubenswrapper[4811]: I0219 11:14:52.328226 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" event={"ID":"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe","Type":"ContainerStarted","Data":"a6af4a7b46cf6acb8a64d6f30ac66be8f7c5f999a548772026afc62201bd2928"} Feb 19 11:14:52 crc kubenswrapper[4811]: I0219 11:14:52.350645 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" podStartSLOduration=1.350613883 podStartE2EDuration="1.350613883s" podCreationTimestamp="2026-02-19 11:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 11:14:52.340896057 +0000 UTC m=+3980.867360753" watchObservedRunningTime="2026-02-19 11:14:52.350613883 +0000 UTC m=+3980.877078569" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.208377 4811 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q"] Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.212560 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.216818 4811 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.217596 4811 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.233432 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q"] Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.252894 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01434bea-6fa9-4f9a-9e45-483438a11f69-secret-volume\") pod \"collect-profiles-29524995-kh26q\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.253101 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01434bea-6fa9-4f9a-9e45-483438a11f69-config-volume\") pod \"collect-profiles-29524995-kh26q\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.253126 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8q7\" (UniqueName: 
\"kubernetes.io/projected/01434bea-6fa9-4f9a-9e45-483438a11f69-kube-api-access-hn8q7\") pod \"collect-profiles-29524995-kh26q\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.355098 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01434bea-6fa9-4f9a-9e45-483438a11f69-config-volume\") pod \"collect-profiles-29524995-kh26q\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.355159 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8q7\" (UniqueName: \"kubernetes.io/projected/01434bea-6fa9-4f9a-9e45-483438a11f69-kube-api-access-hn8q7\") pod \"collect-profiles-29524995-kh26q\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.355218 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01434bea-6fa9-4f9a-9e45-483438a11f69-secret-volume\") pod \"collect-profiles-29524995-kh26q\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.356660 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01434bea-6fa9-4f9a-9e45-483438a11f69-config-volume\") pod \"collect-profiles-29524995-kh26q\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 
11:15:00.362675 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01434bea-6fa9-4f9a-9e45-483438a11f69-secret-volume\") pod \"collect-profiles-29524995-kh26q\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:00 crc kubenswrapper[4811]: I0219 11:15:00.375411 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8q7\" (UniqueName: \"kubernetes.io/projected/01434bea-6fa9-4f9a-9e45-483438a11f69-kube-api-access-hn8q7\") pod \"collect-profiles-29524995-kh26q\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:01 crc kubenswrapper[4811]: I0219 11:15:01.583281 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:15:01 crc kubenswrapper[4811]: E0219 11:15:01.583934 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:15:01 crc kubenswrapper[4811]: I0219 11:15:01.703908 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:02 crc kubenswrapper[4811]: I0219 11:15:02.279965 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q"] Feb 19 11:15:02 crc kubenswrapper[4811]: I0219 11:15:02.422581 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" event={"ID":"01434bea-6fa9-4f9a-9e45-483438a11f69","Type":"ContainerStarted","Data":"dece8de5e26e8c157fba19c8e69df1063ebfec4452c897edb8973f6085d56af1"} Feb 19 11:15:03 crc kubenswrapper[4811]: I0219 11:15:03.443834 4811 generic.go:334] "Generic (PLEG): container finished" podID="01434bea-6fa9-4f9a-9e45-483438a11f69" containerID="473055fdc07cd628c9a42b173f3996265e5118c0b0ebe9e8202e1c2ecbb9abe4" exitCode=0 Feb 19 11:15:03 crc kubenswrapper[4811]: I0219 11:15:03.444368 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" event={"ID":"01434bea-6fa9-4f9a-9e45-483438a11f69","Type":"ContainerDied","Data":"473055fdc07cd628c9a42b173f3996265e5118c0b0ebe9e8202e1c2ecbb9abe4"} Feb 19 11:15:04 crc kubenswrapper[4811]: I0219 11:15:04.905616 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.052574 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01434bea-6fa9-4f9a-9e45-483438a11f69-secret-volume\") pod \"01434bea-6fa9-4f9a-9e45-483438a11f69\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.052800 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01434bea-6fa9-4f9a-9e45-483438a11f69-config-volume\") pod \"01434bea-6fa9-4f9a-9e45-483438a11f69\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.053005 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn8q7\" (UniqueName: \"kubernetes.io/projected/01434bea-6fa9-4f9a-9e45-483438a11f69-kube-api-access-hn8q7\") pod \"01434bea-6fa9-4f9a-9e45-483438a11f69\" (UID: \"01434bea-6fa9-4f9a-9e45-483438a11f69\") " Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.053422 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01434bea-6fa9-4f9a-9e45-483438a11f69-config-volume" (OuterVolumeSpecName: "config-volume") pod "01434bea-6fa9-4f9a-9e45-483438a11f69" (UID: "01434bea-6fa9-4f9a-9e45-483438a11f69"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.053650 4811 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01434bea-6fa9-4f9a-9e45-483438a11f69-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.059634 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01434bea-6fa9-4f9a-9e45-483438a11f69-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "01434bea-6fa9-4f9a-9e45-483438a11f69" (UID: "01434bea-6fa9-4f9a-9e45-483438a11f69"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.063126 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01434bea-6fa9-4f9a-9e45-483438a11f69-kube-api-access-hn8q7" (OuterVolumeSpecName: "kube-api-access-hn8q7") pod "01434bea-6fa9-4f9a-9e45-483438a11f69" (UID: "01434bea-6fa9-4f9a-9e45-483438a11f69"). InnerVolumeSpecName "kube-api-access-hn8q7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.155337 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn8q7\" (UniqueName: \"kubernetes.io/projected/01434bea-6fa9-4f9a-9e45-483438a11f69-kube-api-access-hn8q7\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.155384 4811 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01434bea-6fa9-4f9a-9e45-483438a11f69-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.465324 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" event={"ID":"01434bea-6fa9-4f9a-9e45-483438a11f69","Type":"ContainerDied","Data":"dece8de5e26e8c157fba19c8e69df1063ebfec4452c897edb8973f6085d56af1"} Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.465385 4811 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dece8de5e26e8c157fba19c8e69df1063ebfec4452c897edb8973f6085d56af1" Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.465390 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524995-kh26q" Feb 19 11:15:05 crc kubenswrapper[4811]: I0219 11:15:05.990924 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb"] Feb 19 11:15:06 crc kubenswrapper[4811]: I0219 11:15:06.000228 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524950-5mlkb"] Feb 19 11:15:06 crc kubenswrapper[4811]: I0219 11:15:06.595322 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a69487-4b64-4509-85a9-0600a5b4e958" path="/var/lib/kubelet/pods/61a69487-4b64-4509-85a9-0600a5b4e958/volumes" Feb 19 11:15:11 crc kubenswrapper[4811]: I0219 11:15:11.836902 4811 scope.go:117] "RemoveContainer" containerID="22ead511b2c643cb1f40b13dea790ef572e7e2eedf491d311e21a0b5d81706ec" Feb 19 11:15:15 crc kubenswrapper[4811]: I0219 11:15:15.583625 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:15:15 crc kubenswrapper[4811]: E0219 11:15:15.584983 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:15:26 crc kubenswrapper[4811]: I0219 11:15:26.583554 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:15:26 crc kubenswrapper[4811]: E0219 11:15:26.584411 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:15:28 crc kubenswrapper[4811]: I0219 11:15:28.710898 4811 generic.go:334] "Generic (PLEG): container finished" podID="c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe" containerID="b7b881e060f1baf914d1e036335e32d77a20fc5cec7a9d82ed33c1bacc2235f4" exitCode=0 Feb 19 11:15:28 crc kubenswrapper[4811]: I0219 11:15:28.711033 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" event={"ID":"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe","Type":"ContainerDied","Data":"b7b881e060f1baf914d1e036335e32d77a20fc5cec7a9d82ed33c1bacc2235f4"} Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.239103 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.274730 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fj7sq/crc-debug-j5qb4"] Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.282407 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fj7sq/crc-debug-j5qb4"] Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.370041 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-host\") pod \"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe\" (UID: \"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe\") " Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.370151 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmrb6\" (UniqueName: \"kubernetes.io/projected/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-kube-api-access-nmrb6\") pod 
\"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe\" (UID: \"c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe\") " Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.370237 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-host" (OuterVolumeSpecName: "host") pod "c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe" (UID: "c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.370715 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-host\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.378626 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-kube-api-access-nmrb6" (OuterVolumeSpecName: "kube-api-access-nmrb6") pod "c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe" (UID: "c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe"). InnerVolumeSpecName "kube-api-access-nmrb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.472282 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmrb6\" (UniqueName: \"kubernetes.io/projected/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe-kube-api-access-nmrb6\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.594525 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe" path="/var/lib/kubelet/pods/c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe/volumes" Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.758042 4811 scope.go:117] "RemoveContainer" containerID="b7b881e060f1baf914d1e036335e32d77a20fc5cec7a9d82ed33c1bacc2235f4" Feb 19 11:15:30 crc kubenswrapper[4811]: I0219 11:15:30.758092 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-j5qb4" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.585081 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fj7sq/crc-debug-5ht2q"] Feb 19 11:15:31 crc kubenswrapper[4811]: E0219 11:15:31.586642 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01434bea-6fa9-4f9a-9e45-483438a11f69" containerName="collect-profiles" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.586759 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="01434bea-6fa9-4f9a-9e45-483438a11f69" containerName="collect-profiles" Feb 19 11:15:31 crc kubenswrapper[4811]: E0219 11:15:31.586896 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe" containerName="container-00" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.586996 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe" containerName="container-00" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.587317 4811 
memory_manager.go:354] "RemoveStaleState removing state" podUID="01434bea-6fa9-4f9a-9e45-483438a11f69" containerName="collect-profiles" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.587434 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b7ade7-bf7f-44bf-a79a-3e8d83bc62fe" containerName="container-00" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.588407 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.696493 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgm9v\" (UniqueName: \"kubernetes.io/projected/32ba2013-9e49-4542-a5a4-93c07a04e207-kube-api-access-hgm9v\") pod \"crc-debug-5ht2q\" (UID: \"32ba2013-9e49-4542-a5a4-93c07a04e207\") " pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.696614 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ba2013-9e49-4542-a5a4-93c07a04e207-host\") pod \"crc-debug-5ht2q\" (UID: \"32ba2013-9e49-4542-a5a4-93c07a04e207\") " pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.799271 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ba2013-9e49-4542-a5a4-93c07a04e207-host\") pod \"crc-debug-5ht2q\" (UID: \"32ba2013-9e49-4542-a5a4-93c07a04e207\") " pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.799433 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ba2013-9e49-4542-a5a4-93c07a04e207-host\") pod \"crc-debug-5ht2q\" (UID: \"32ba2013-9e49-4542-a5a4-93c07a04e207\") " 
pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.799852 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgm9v\" (UniqueName: \"kubernetes.io/projected/32ba2013-9e49-4542-a5a4-93c07a04e207-kube-api-access-hgm9v\") pod \"crc-debug-5ht2q\" (UID: \"32ba2013-9e49-4542-a5a4-93c07a04e207\") " pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.819412 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgm9v\" (UniqueName: \"kubernetes.io/projected/32ba2013-9e49-4542-a5a4-93c07a04e207-kube-api-access-hgm9v\") pod \"crc-debug-5ht2q\" (UID: \"32ba2013-9e49-4542-a5a4-93c07a04e207\") " pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" Feb 19 11:15:31 crc kubenswrapper[4811]: I0219 11:15:31.909062 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" Feb 19 11:15:32 crc kubenswrapper[4811]: I0219 11:15:32.784217 4811 generic.go:334] "Generic (PLEG): container finished" podID="32ba2013-9e49-4542-a5a4-93c07a04e207" containerID="3bebb77c5ca63c908613f179736008b5c1251d5f762fd535b6e87dd0f938b7f3" exitCode=0 Feb 19 11:15:32 crc kubenswrapper[4811]: I0219 11:15:32.784311 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" event={"ID":"32ba2013-9e49-4542-a5a4-93c07a04e207","Type":"ContainerDied","Data":"3bebb77c5ca63c908613f179736008b5c1251d5f762fd535b6e87dd0f938b7f3"} Feb 19 11:15:32 crc kubenswrapper[4811]: I0219 11:15:32.784728 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" event={"ID":"32ba2013-9e49-4542-a5a4-93c07a04e207","Type":"ContainerStarted","Data":"a10a0a8329683a11932b7915a93aa9f2881092228dfb570e80e6f18380d01c2e"} Feb 19 11:15:33 crc kubenswrapper[4811]: I0219 11:15:33.200235 4811 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-must-gather-fj7sq/crc-debug-5ht2q"] Feb 19 11:15:33 crc kubenswrapper[4811]: I0219 11:15:33.211723 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fj7sq/crc-debug-5ht2q"] Feb 19 11:15:33 crc kubenswrapper[4811]: I0219 11:15:33.903429 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.062165 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ba2013-9e49-4542-a5a4-93c07a04e207-host\") pod \"32ba2013-9e49-4542-a5a4-93c07a04e207\" (UID: \"32ba2013-9e49-4542-a5a4-93c07a04e207\") " Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.062257 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgm9v\" (UniqueName: \"kubernetes.io/projected/32ba2013-9e49-4542-a5a4-93c07a04e207-kube-api-access-hgm9v\") pod \"32ba2013-9e49-4542-a5a4-93c07a04e207\" (UID: \"32ba2013-9e49-4542-a5a4-93c07a04e207\") " Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.062320 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ba2013-9e49-4542-a5a4-93c07a04e207-host" (OuterVolumeSpecName: "host") pod "32ba2013-9e49-4542-a5a4-93c07a04e207" (UID: "32ba2013-9e49-4542-a5a4-93c07a04e207"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.062964 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ba2013-9e49-4542-a5a4-93c07a04e207-host\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.068661 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ba2013-9e49-4542-a5a4-93c07a04e207-kube-api-access-hgm9v" (OuterVolumeSpecName: "kube-api-access-hgm9v") pod "32ba2013-9e49-4542-a5a4-93c07a04e207" (UID: "32ba2013-9e49-4542-a5a4-93c07a04e207"). InnerVolumeSpecName "kube-api-access-hgm9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.165841 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgm9v\" (UniqueName: \"kubernetes.io/projected/32ba2013-9e49-4542-a5a4-93c07a04e207-kube-api-access-hgm9v\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.475369 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fj7sq/crc-debug-88bsj"] Feb 19 11:15:34 crc kubenswrapper[4811]: E0219 11:15:34.475917 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ba2013-9e49-4542-a5a4-93c07a04e207" containerName="container-00" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.475939 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ba2013-9e49-4542-a5a4-93c07a04e207" containerName="container-00" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.476143 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ba2013-9e49-4542-a5a4-93c07a04e207" containerName="container-00" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.476836 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-88bsj" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.573397 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7kb6\" (UniqueName: \"kubernetes.io/projected/070b948b-c24d-41ab-af55-544f5fe8e730-kube-api-access-k7kb6\") pod \"crc-debug-88bsj\" (UID: \"070b948b-c24d-41ab-af55-544f5fe8e730\") " pod="openshift-must-gather-fj7sq/crc-debug-88bsj" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.577649 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/070b948b-c24d-41ab-af55-544f5fe8e730-host\") pod \"crc-debug-88bsj\" (UID: \"070b948b-c24d-41ab-af55-544f5fe8e730\") " pod="openshift-must-gather-fj7sq/crc-debug-88bsj" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.596849 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ba2013-9e49-4542-a5a4-93c07a04e207" path="/var/lib/kubelet/pods/32ba2013-9e49-4542-a5a4-93c07a04e207/volumes" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.680036 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7kb6\" (UniqueName: \"kubernetes.io/projected/070b948b-c24d-41ab-af55-544f5fe8e730-kube-api-access-k7kb6\") pod \"crc-debug-88bsj\" (UID: \"070b948b-c24d-41ab-af55-544f5fe8e730\") " pod="openshift-must-gather-fj7sq/crc-debug-88bsj" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.680178 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/070b948b-c24d-41ab-af55-544f5fe8e730-host\") pod \"crc-debug-88bsj\" (UID: \"070b948b-c24d-41ab-af55-544f5fe8e730\") " pod="openshift-must-gather-fj7sq/crc-debug-88bsj" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.680377 4811 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/070b948b-c24d-41ab-af55-544f5fe8e730-host\") pod \"crc-debug-88bsj\" (UID: \"070b948b-c24d-41ab-af55-544f5fe8e730\") " pod="openshift-must-gather-fj7sq/crc-debug-88bsj" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.703136 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7kb6\" (UniqueName: \"kubernetes.io/projected/070b948b-c24d-41ab-af55-544f5fe8e730-kube-api-access-k7kb6\") pod \"crc-debug-88bsj\" (UID: \"070b948b-c24d-41ab-af55-544f5fe8e730\") " pod="openshift-must-gather-fj7sq/crc-debug-88bsj" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.798913 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-88bsj" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.807340 4811 scope.go:117] "RemoveContainer" containerID="3bebb77c5ca63c908613f179736008b5c1251d5f762fd535b6e87dd0f938b7f3" Feb 19 11:15:34 crc kubenswrapper[4811]: I0219 11:15:34.807511 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-5ht2q" Feb 19 11:15:35 crc kubenswrapper[4811]: I0219 11:15:35.819565 4811 generic.go:334] "Generic (PLEG): container finished" podID="070b948b-c24d-41ab-af55-544f5fe8e730" containerID="2e08ad14474989def5a4fe940a8471d1cc93f4d4cff284cc7d640ff304481f32" exitCode=0 Feb 19 11:15:35 crc kubenswrapper[4811]: I0219 11:15:35.819731 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fj7sq/crc-debug-88bsj" event={"ID":"070b948b-c24d-41ab-af55-544f5fe8e730","Type":"ContainerDied","Data":"2e08ad14474989def5a4fe940a8471d1cc93f4d4cff284cc7d640ff304481f32"} Feb 19 11:15:35 crc kubenswrapper[4811]: I0219 11:15:35.820269 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fj7sq/crc-debug-88bsj" event={"ID":"070b948b-c24d-41ab-af55-544f5fe8e730","Type":"ContainerStarted","Data":"dc1fdca4449d161648dfd62c3aa42e9dd8a07c1a7fe3e84dcb882f52fc79b7a0"} Feb 19 11:15:35 crc kubenswrapper[4811]: I0219 11:15:35.874708 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fj7sq/crc-debug-88bsj"] Feb 19 11:15:35 crc kubenswrapper[4811]: I0219 11:15:35.889292 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fj7sq/crc-debug-88bsj"] Feb 19 11:15:36 crc kubenswrapper[4811]: I0219 11:15:36.944373 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-88bsj" Feb 19 11:15:37 crc kubenswrapper[4811]: I0219 11:15:37.036827 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/070b948b-c24d-41ab-af55-544f5fe8e730-host\") pod \"070b948b-c24d-41ab-af55-544f5fe8e730\" (UID: \"070b948b-c24d-41ab-af55-544f5fe8e730\") " Feb 19 11:15:37 crc kubenswrapper[4811]: I0219 11:15:37.037251 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/070b948b-c24d-41ab-af55-544f5fe8e730-host" (OuterVolumeSpecName: "host") pod "070b948b-c24d-41ab-af55-544f5fe8e730" (UID: "070b948b-c24d-41ab-af55-544f5fe8e730"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 11:15:37 crc kubenswrapper[4811]: I0219 11:15:37.037285 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7kb6\" (UniqueName: \"kubernetes.io/projected/070b948b-c24d-41ab-af55-544f5fe8e730-kube-api-access-k7kb6\") pod \"070b948b-c24d-41ab-af55-544f5fe8e730\" (UID: \"070b948b-c24d-41ab-af55-544f5fe8e730\") " Feb 19 11:15:37 crc kubenswrapper[4811]: I0219 11:15:37.040023 4811 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/070b948b-c24d-41ab-af55-544f5fe8e730-host\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:37 crc kubenswrapper[4811]: I0219 11:15:37.084380 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070b948b-c24d-41ab-af55-544f5fe8e730-kube-api-access-k7kb6" (OuterVolumeSpecName: "kube-api-access-k7kb6") pod "070b948b-c24d-41ab-af55-544f5fe8e730" (UID: "070b948b-c24d-41ab-af55-544f5fe8e730"). InnerVolumeSpecName "kube-api-access-k7kb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:15:37 crc kubenswrapper[4811]: I0219 11:15:37.142180 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7kb6\" (UniqueName: \"kubernetes.io/projected/070b948b-c24d-41ab-af55-544f5fe8e730-kube-api-access-k7kb6\") on node \"crc\" DevicePath \"\"" Feb 19 11:15:37 crc kubenswrapper[4811]: I0219 11:15:37.843299 4811 scope.go:117] "RemoveContainer" containerID="2e08ad14474989def5a4fe940a8471d1cc93f4d4cff284cc7d640ff304481f32" Feb 19 11:15:37 crc kubenswrapper[4811]: I0219 11:15:37.843310 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fj7sq/crc-debug-88bsj" Feb 19 11:15:38 crc kubenswrapper[4811]: I0219 11:15:38.588199 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:15:38 crc kubenswrapper[4811]: E0219 11:15:38.588995 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:15:38 crc kubenswrapper[4811]: I0219 11:15:38.595936 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070b948b-c24d-41ab-af55-544f5fe8e730" path="/var/lib/kubelet/pods/070b948b-c24d-41ab-af55-544f5fe8e730/volumes" Feb 19 11:15:49 crc kubenswrapper[4811]: I0219 11:15:49.584652 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:15:49 crc kubenswrapper[4811]: E0219 11:15:49.586329 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:16:01 crc kubenswrapper[4811]: I0219 11:16:01.584226 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:16:01 crc kubenswrapper[4811]: E0219 11:16:01.585264 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:16:12 crc kubenswrapper[4811]: I0219 11:16:12.696238 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6566bc4c4b-lsrcb_39389157-f52c-4caa-85a5-e2306e903180/barbican-api/0.log" Feb 19 11:16:12 crc kubenswrapper[4811]: I0219 11:16:12.784985 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6566bc4c4b-lsrcb_39389157-f52c-4caa-85a5-e2306e903180/barbican-api-log/0.log" Feb 19 11:16:12 crc kubenswrapper[4811]: I0219 11:16:12.905139 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-594c4b4cb6-89gd8_51d8831e-3334-4e4a-84b1-98912b1dc6f2/barbican-keystone-listener/0.log" Feb 19 11:16:12 crc kubenswrapper[4811]: I0219 11:16:12.984762 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-594c4b4cb6-89gd8_51d8831e-3334-4e4a-84b1-98912b1dc6f2/barbican-keystone-listener-log/0.log" Feb 19 11:16:13 crc kubenswrapper[4811]: I0219 
11:16:13.094785 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7cc994bfc5-bf96r_20451628-f7b0-43b3-b0b9-4ab828db8a77/barbican-worker/0.log" Feb 19 11:16:13 crc kubenswrapper[4811]: I0219 11:16:13.132560 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7cc994bfc5-bf96r_20451628-f7b0-43b3-b0b9-4ab828db8a77/barbican-worker-log/0.log" Feb 19 11:16:13 crc kubenswrapper[4811]: I0219 11:16:13.231224 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-97bxz_7ce25cb1-51ce-4643-9fac-e7fa4b1afce9/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:13 crc kubenswrapper[4811]: I0219 11:16:13.349499 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd0891f5-d119-48e2-8e7e-1ddfa51417a9/ceilometer-central-agent/0.log" Feb 19 11:16:13 crc kubenswrapper[4811]: I0219 11:16:13.408319 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd0891f5-d119-48e2-8e7e-1ddfa51417a9/ceilometer-notification-agent/0.log" Feb 19 11:16:14 crc kubenswrapper[4811]: I0219 11:16:14.106385 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd0891f5-d119-48e2-8e7e-1ddfa51417a9/sg-core/0.log" Feb 19 11:16:14 crc kubenswrapper[4811]: I0219 11:16:14.113687 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cd0891f5-d119-48e2-8e7e-1ddfa51417a9/proxy-httpd/0.log" Feb 19 11:16:14 crc kubenswrapper[4811]: I0219 11:16:14.158097 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ebb19588-7922-460c-9368-05aaf6706b52/cinder-api/0.log" Feb 19 11:16:14 crc kubenswrapper[4811]: I0219 11:16:14.301234 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ebb19588-7922-460c-9368-05aaf6706b52/cinder-api-log/0.log" Feb 19 11:16:14 crc 
kubenswrapper[4811]: I0219 11:16:14.439888 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed/probe/0.log" Feb 19 11:16:14 crc kubenswrapper[4811]: I0219 11:16:14.467769 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1fb1d9a8-09df-4b01-bb55-ab0a032cd0ed/cinder-scheduler/0.log" Feb 19 11:16:14 crc kubenswrapper[4811]: I0219 11:16:14.608163 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-88tdb_ddf9ccc5-ba24-4253-93d5-1c2024978965/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:14 crc kubenswrapper[4811]: I0219 11:16:14.731510 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-s6crt_0409deb0-2253-48f7-8f17-3bf3c70088ef/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:14 crc kubenswrapper[4811]: I0219 11:16:14.819845 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-xm8bw_deca690b-51f2-4ee0-adbb-d7dfaa165fcc/init/0.log" Feb 19 11:16:15 crc kubenswrapper[4811]: I0219 11:16:15.077850 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-xm8bw_deca690b-51f2-4ee0-adbb-d7dfaa165fcc/init/0.log" Feb 19 11:16:15 crc kubenswrapper[4811]: I0219 11:16:15.163823 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-xm8bw_deca690b-51f2-4ee0-adbb-d7dfaa165fcc/dnsmasq-dns/0.log" Feb 19 11:16:15 crc kubenswrapper[4811]: I0219 11:16:15.180986 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qwdvx_57a5bec4-f1ad-45e6-9045-dea45abd7ccf/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:15 crc kubenswrapper[4811]: I0219 11:16:15.584275 4811 
scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:16:15 crc kubenswrapper[4811]: E0219 11:16:15.584938 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:16:15 crc kubenswrapper[4811]: I0219 11:16:15.731370 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5a784552-f4b7-42d3-a0b7-d223fbe3818c/glance-httpd/0.log" Feb 19 11:16:15 crc kubenswrapper[4811]: I0219 11:16:15.751316 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5a784552-f4b7-42d3-a0b7-d223fbe3818c/glance-log/0.log" Feb 19 11:16:15 crc kubenswrapper[4811]: I0219 11:16:15.912356 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7/glance-log/0.log" Feb 19 11:16:15 crc kubenswrapper[4811]: I0219 11:16:15.933375 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fb7714a4-fcc1-4eff-96d8-0e9eaa96bde7/glance-httpd/0.log" Feb 19 11:16:16 crc kubenswrapper[4811]: I0219 11:16:16.114107 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-676dcb6496-tnhmt_19d17501-8d5f-4f9d-8fad-94f48224e910/horizon/1.log" Feb 19 11:16:16 crc kubenswrapper[4811]: I0219 11:16:16.302489 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-676dcb6496-tnhmt_19d17501-8d5f-4f9d-8fad-94f48224e910/horizon/0.log" Feb 19 11:16:16 crc kubenswrapper[4811]: I0219 11:16:16.354396 4811 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vjbd8_2d303b5a-179e-407e-8dae-2eab291880d5/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:16 crc kubenswrapper[4811]: I0219 11:16:16.526953 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kg62s_4ed42933-309e-4fe0-874b-7ec5fb67d807/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:16 crc kubenswrapper[4811]: I0219 11:16:16.561005 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-676dcb6496-tnhmt_19d17501-8d5f-4f9d-8fad-94f48224e910/horizon-log/0.log" Feb 19 11:16:16 crc kubenswrapper[4811]: I0219 11:16:16.762113 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54bd9d4f8c-p9wzd_88141b63-6cd8-4294-a7dc-a2104eb13eb4/keystone-api/0.log" Feb 19 11:16:16 crc kubenswrapper[4811]: I0219 11:16:16.765176 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524981-j4dnl_58f1765b-3836-4257-aa59-d47c7464c1e5/keystone-cron/0.log" Feb 19 11:16:16 crc kubenswrapper[4811]: I0219 11:16:16.796218 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_5f41cd68-0e1f-48df-a830-1c581d0cfe9e/kube-state-metrics/0.log" Feb 19 11:16:16 crc kubenswrapper[4811]: I0219 11:16:16.971332 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-flfd2_177810e6-4482-4c82-ac9a-721e76c63579/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:17 crc kubenswrapper[4811]: I0219 11:16:17.306890 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-599475cf85-sh7k6_70197607-b202-4d15-a797-ef5d138c5502/neutron-httpd/0.log" Feb 19 11:16:17 crc kubenswrapper[4811]: I0219 11:16:17.377796 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-9qbvz_a207452e-4fab-4a38-be85-31bc84e8d6fc/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:17 crc kubenswrapper[4811]: I0219 11:16:17.429738 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-599475cf85-sh7k6_70197607-b202-4d15-a797-ef5d138c5502/neutron-api/0.log" Feb 19 11:16:17 crc kubenswrapper[4811]: I0219 11:16:17.953906 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2746bf91-50ba-4dc9-a6cf-1f7117f48ee2/nova-api-log/0.log" Feb 19 11:16:18 crc kubenswrapper[4811]: I0219 11:16:18.032242 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b92a06d2-fab4-462e-ae54-2dd7087ad94a/nova-cell0-conductor-conductor/0.log" Feb 19 11:16:18 crc kubenswrapper[4811]: I0219 11:16:18.378052 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_10c31d4c-29cf-4dec-8cf1-98f5f1e804a8/nova-cell1-conductor-conductor/0.log" Feb 19 11:16:18 crc kubenswrapper[4811]: I0219 11:16:18.435603 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_66690abb-6530-48ba-ab47-36369217c473/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 11:16:18 crc kubenswrapper[4811]: I0219 11:16:18.472470 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2746bf91-50ba-4dc9-a6cf-1f7117f48ee2/nova-api-api/0.log" Feb 19 11:16:18 crc kubenswrapper[4811]: I0219 11:16:18.615056 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6djpr_3aebcdfb-c20a-447c-9b30-fa8f749bbf90/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:18 crc kubenswrapper[4811]: I0219 11:16:18.726998 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_2e32654e-c645-4e60-9879-73ed179fa105/nova-metadata-log/0.log" Feb 19 11:16:19 crc kubenswrapper[4811]: I0219 11:16:19.148383 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_354e40bc-0a2b-4654-873c-30b6293d6192/mysql-bootstrap/0.log" Feb 19 11:16:19 crc kubenswrapper[4811]: I0219 11:16:19.159523 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_362f5b81-de68-4bce-824c-734e9d18f1f1/nova-scheduler-scheduler/0.log" Feb 19 11:16:19 crc kubenswrapper[4811]: I0219 11:16:19.308038 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_354e40bc-0a2b-4654-873c-30b6293d6192/mysql-bootstrap/0.log" Feb 19 11:16:19 crc kubenswrapper[4811]: I0219 11:16:19.382360 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_354e40bc-0a2b-4654-873c-30b6293d6192/galera/0.log" Feb 19 11:16:19 crc kubenswrapper[4811]: I0219 11:16:19.553810 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6/mysql-bootstrap/0.log" Feb 19 11:16:19 crc kubenswrapper[4811]: I0219 11:16:19.748462 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6/galera/0.log" Feb 19 11:16:19 crc kubenswrapper[4811]: I0219 11:16:19.753533 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ce72707a-ccd4-4e7f-bec1-9e79d1fd95c6/mysql-bootstrap/0.log" Feb 19 11:16:19 crc kubenswrapper[4811]: I0219 11:16:19.967789 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c6494cd0-c929-4278-974b-0839193092f1/openstackclient/0.log" Feb 19 11:16:20 crc kubenswrapper[4811]: I0219 11:16:20.028063 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-qwhjn_764be2bf-8d0b-4783-afcd-de76bac7ce34/openstack-network-exporter/0.log" Feb 19 11:16:20 crc kubenswrapper[4811]: I0219 11:16:20.214878 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mvhxm_00337a1f-39d0-4583-b35b-a50aeb572f60/ovn-controller/0.log" Feb 19 11:16:20 crc kubenswrapper[4811]: I0219 11:16:20.253882 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2e32654e-c645-4e60-9879-73ed179fa105/nova-metadata-metadata/0.log" Feb 19 11:16:20 crc kubenswrapper[4811]: I0219 11:16:20.416812 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bddvk_a76ec0f0-9b42-4e48-bd3f-68a74161cc4c/ovsdb-server-init/0.log" Feb 19 11:16:20 crc kubenswrapper[4811]: I0219 11:16:20.650128 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bddvk_a76ec0f0-9b42-4e48-bd3f-68a74161cc4c/ovs-vswitchd/0.log" Feb 19 11:16:20 crc kubenswrapper[4811]: I0219 11:16:20.655115 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bddvk_a76ec0f0-9b42-4e48-bd3f-68a74161cc4c/ovsdb-server-init/0.log" Feb 19 11:16:20 crc kubenswrapper[4811]: I0219 11:16:20.702576 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bddvk_a76ec0f0-9b42-4e48-bd3f-68a74161cc4c/ovsdb-server/0.log" Feb 19 11:16:20 crc kubenswrapper[4811]: I0219 11:16:20.928708 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-m9spn_f9579646-1f18-4b3b-af75-9c5af0107c79/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:20 crc kubenswrapper[4811]: I0219 11:16:20.970179 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_195bcb34-7bf7-45d0-92d4-0fb83788222c/ovn-northd/0.log" Feb 19 11:16:20 crc kubenswrapper[4811]: I0219 11:16:20.998388 4811 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_195bcb34-7bf7-45d0-92d4-0fb83788222c/openstack-network-exporter/0.log" Feb 19 11:16:21 crc kubenswrapper[4811]: I0219 11:16:21.192056 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e3fed93-4b85-4e79-a9d3-795dbd7695ba/openstack-network-exporter/0.log" Feb 19 11:16:21 crc kubenswrapper[4811]: I0219 11:16:21.253913 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e3fed93-4b85-4e79-a9d3-795dbd7695ba/ovsdbserver-nb/0.log" Feb 19 11:16:21 crc kubenswrapper[4811]: I0219 11:16:21.413630 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1a9b5380-d587-4d65-8ddb-b6c2faef624a/openstack-network-exporter/0.log" Feb 19 11:16:21 crc kubenswrapper[4811]: I0219 11:16:21.437181 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1a9b5380-d587-4d65-8ddb-b6c2faef624a/ovsdbserver-sb/0.log" Feb 19 11:16:21 crc kubenswrapper[4811]: I0219 11:16:21.603832 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6df884f584-x4z2k_f7db31a7-4566-4027-81f8-af4496f7dc07/placement-api/0.log" Feb 19 11:16:21 crc kubenswrapper[4811]: I0219 11:16:21.753052 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6df884f584-x4z2k_f7db31a7-4566-4027-81f8-af4496f7dc07/placement-log/0.log" Feb 19 11:16:21 crc kubenswrapper[4811]: I0219 11:16:21.836622 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_44cb8367-2f57-4e24-9d48-b2d22a39d128/setup-container/0.log" Feb 19 11:16:21 crc kubenswrapper[4811]: I0219 11:16:21.992614 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_44cb8367-2f57-4e24-9d48-b2d22a39d128/rabbitmq/0.log" Feb 19 11:16:22 crc kubenswrapper[4811]: I0219 11:16:22.060189 4811 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_44cb8367-2f57-4e24-9d48-b2d22a39d128/setup-container/0.log" Feb 19 11:16:22 crc kubenswrapper[4811]: I0219 11:16:22.087283 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c8683fae-0b62-402d-bec3-5c3be97c8c98/setup-container/0.log" Feb 19 11:16:22 crc kubenswrapper[4811]: I0219 11:16:22.217078 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c8683fae-0b62-402d-bec3-5c3be97c8c98/setup-container/0.log" Feb 19 11:16:22 crc kubenswrapper[4811]: I0219 11:16:22.314382 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-br5xl_6563b2b6-b15d-48ca-9c1f-75f9f17321c9/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:22 crc kubenswrapper[4811]: I0219 11:16:22.335075 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c8683fae-0b62-402d-bec3-5c3be97c8c98/rabbitmq/0.log" Feb 19 11:16:22 crc kubenswrapper[4811]: I0219 11:16:22.569791 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-bk8sg_36d956c8-5142-4ec4-b5fb-622c05d0e097/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:22 crc kubenswrapper[4811]: I0219 11:16:22.636210 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5sclt_5b78bd06-9aa5-4755-be4c-ee3d85655501/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:22 crc kubenswrapper[4811]: I0219 11:16:22.980489 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-mcrbf_6b88198f-8c56-4112-956d-2361512d5f2e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:23 crc kubenswrapper[4811]: I0219 11:16:23.095365 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pdvdx_b972495c-f6be-4f80-81e5-ac652cf40950/ssh-known-hosts-edpm-deployment/0.log" Feb 19 11:16:23 crc kubenswrapper[4811]: I0219 11:16:23.365271 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-666bdb77b9-x6mrc_0beeae9e-3b17-4691-8273-95fbc4be85fb/proxy-server/0.log" Feb 19 11:16:23 crc kubenswrapper[4811]: I0219 11:16:23.401226 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-666bdb77b9-x6mrc_0beeae9e-3b17-4691-8273-95fbc4be85fb/proxy-httpd/0.log" Feb 19 11:16:23 crc kubenswrapper[4811]: I0219 11:16:23.482001 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-r67q5_30bb0370-e0d8-4aaf-8a07-35ab4af890c6/swift-ring-rebalance/0.log" Feb 19 11:16:23 crc kubenswrapper[4811]: I0219 11:16:23.640081 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/account-reaper/0.log" Feb 19 11:16:23 crc kubenswrapper[4811]: I0219 11:16:23.701623 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/account-auditor/0.log" Feb 19 11:16:23 crc kubenswrapper[4811]: I0219 11:16:23.808402 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/account-replicator/0.log" Feb 19 11:16:23 crc kubenswrapper[4811]: I0219 11:16:23.843179 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/account-server/0.log" Feb 19 11:16:23 crc kubenswrapper[4811]: I0219 11:16:23.919829 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/container-auditor/0.log" Feb 19 11:16:23 crc kubenswrapper[4811]: I0219 11:16:23.969735 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/container-replicator/0.log" Feb 19 11:16:24 crc kubenswrapper[4811]: I0219 11:16:24.510165 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/container-updater/0.log" Feb 19 11:16:24 crc kubenswrapper[4811]: I0219 11:16:24.541707 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/container-server/0.log" Feb 19 11:16:24 crc kubenswrapper[4811]: I0219 11:16:24.575191 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/object-expirer/0.log" Feb 19 11:16:24 crc kubenswrapper[4811]: I0219 11:16:24.591600 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/object-auditor/0.log" Feb 19 11:16:24 crc kubenswrapper[4811]: I0219 11:16:24.773011 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/object-server/0.log" Feb 19 11:16:24 crc kubenswrapper[4811]: I0219 11:16:24.774091 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/object-updater/0.log" Feb 19 11:16:24 crc kubenswrapper[4811]: I0219 11:16:24.811143 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/object-replicator/0.log" Feb 19 11:16:24 crc kubenswrapper[4811]: I0219 11:16:24.847988 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/rsync/0.log" Feb 19 11:16:25 crc kubenswrapper[4811]: I0219 11:16:25.017087 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_dce922ee-7833-4b43-9358-72aa7da07a03/swift-recon-cron/0.log" Feb 19 11:16:25 crc kubenswrapper[4811]: I0219 11:16:25.136502 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-wkdvx_2017b43a-f947-4d28-97b7-5cf3d5d0cc1b/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:25 crc kubenswrapper[4811]: I0219 11:16:25.244146 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_4e965f02-705b-4631-879f-27070f0bfa70/tempest-tests-tempest-tests-runner/0.log" Feb 19 11:16:25 crc kubenswrapper[4811]: I0219 11:16:25.413201 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_a0b6d7a4-d66c-4621-9fd1-bb45c0d5860b/test-operator-logs-container/0.log" Feb 19 11:16:25 crc kubenswrapper[4811]: I0219 11:16:25.506259 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vxwxz_6df41748-d8bb-49c9-aa6f-403827039ae7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 11:16:27 crc kubenswrapper[4811]: I0219 11:16:27.583036 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:16:27 crc kubenswrapper[4811]: E0219 11:16:27.583887 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:16:35 crc kubenswrapper[4811]: I0219 11:16:35.660241 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_c2ff968e-6341-43f8-bf15-21009d76f661/memcached/0.log" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.083522 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fs5sp"] Feb 19 11:16:40 crc kubenswrapper[4811]: E0219 11:16:40.084598 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070b948b-c24d-41ab-af55-544f5fe8e730" containerName="container-00" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.084613 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="070b948b-c24d-41ab-af55-544f5fe8e730" containerName="container-00" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.084850 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="070b948b-c24d-41ab-af55-544f5fe8e730" containerName="container-00" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.086697 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.097864 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fs5sp"] Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.118147 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9tk6\" (UniqueName: \"kubernetes.io/projected/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-kube-api-access-v9tk6\") pod \"redhat-operators-fs5sp\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.118224 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-catalog-content\") pod \"redhat-operators-fs5sp\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " 
pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.118399 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-utilities\") pod \"redhat-operators-fs5sp\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.219295 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-utilities\") pod \"redhat-operators-fs5sp\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.219395 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9tk6\" (UniqueName: \"kubernetes.io/projected/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-kube-api-access-v9tk6\") pod \"redhat-operators-fs5sp\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.219427 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-catalog-content\") pod \"redhat-operators-fs5sp\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.219984 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-utilities\") pod \"redhat-operators-fs5sp\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 
11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.220031 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-catalog-content\") pod \"redhat-operators-fs5sp\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.241623 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9tk6\" (UniqueName: \"kubernetes.io/projected/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-kube-api-access-v9tk6\") pod \"redhat-operators-fs5sp\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.411692 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:40 crc kubenswrapper[4811]: I0219 11:16:40.929780 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fs5sp"] Feb 19 11:16:41 crc kubenswrapper[4811]: I0219 11:16:41.488116 4811 generic.go:334] "Generic (PLEG): container finished" podID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" containerID="a2f3db4050bc83be7e46f24aeff996f0994e6c3bb1faf9ea007f351831aef4c9" exitCode=0 Feb 19 11:16:41 crc kubenswrapper[4811]: I0219 11:16:41.488204 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs5sp" event={"ID":"4e7e033c-0702-43ed-93bc-86a7ef2d6e19","Type":"ContainerDied","Data":"a2f3db4050bc83be7e46f24aeff996f0994e6c3bb1faf9ea007f351831aef4c9"} Feb 19 11:16:41 crc kubenswrapper[4811]: I0219 11:16:41.488278 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs5sp" 
event={"ID":"4e7e033c-0702-43ed-93bc-86a7ef2d6e19","Type":"ContainerStarted","Data":"9e7797f30cb2df76f91aa41c1fde613d542b0e9037b7fbc09f87102ee7a38bd2"} Feb 19 11:16:42 crc kubenswrapper[4811]: I0219 11:16:42.589695 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:16:42 crc kubenswrapper[4811]: E0219 11:16:42.590756 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:16:43 crc kubenswrapper[4811]: I0219 11:16:43.510953 4811 generic.go:334] "Generic (PLEG): container finished" podID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" containerID="3cdecbea43ca125b37cedbb72b1275d599974e3573d169c94109c36336ef6932" exitCode=0 Feb 19 11:16:43 crc kubenswrapper[4811]: I0219 11:16:43.511022 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs5sp" event={"ID":"4e7e033c-0702-43ed-93bc-86a7ef2d6e19","Type":"ContainerDied","Data":"3cdecbea43ca125b37cedbb72b1275d599974e3573d169c94109c36336ef6932"} Feb 19 11:16:44 crc kubenswrapper[4811]: I0219 11:16:44.523255 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs5sp" event={"ID":"4e7e033c-0702-43ed-93bc-86a7ef2d6e19","Type":"ContainerStarted","Data":"2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2"} Feb 19 11:16:44 crc kubenswrapper[4811]: I0219 11:16:44.546041 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fs5sp" podStartSLOduration=1.9705527520000001 podStartE2EDuration="4.546011261s" 
podCreationTimestamp="2026-02-19 11:16:40 +0000 UTC" firstStartedPulling="2026-02-19 11:16:41.490620156 +0000 UTC m=+4090.017084842" lastFinishedPulling="2026-02-19 11:16:44.066078625 +0000 UTC m=+4092.592543351" observedRunningTime="2026-02-19 11:16:44.54202275 +0000 UTC m=+4093.068487456" watchObservedRunningTime="2026-02-19 11:16:44.546011261 +0000 UTC m=+4093.072475947" Feb 19 11:16:50 crc kubenswrapper[4811]: I0219 11:16:50.412366 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:50 crc kubenswrapper[4811]: I0219 11:16:50.413005 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:50 crc kubenswrapper[4811]: I0219 11:16:50.462314 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:50 crc kubenswrapper[4811]: I0219 11:16:50.642086 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:50 crc kubenswrapper[4811]: I0219 11:16:50.718411 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fs5sp"] Feb 19 11:16:52 crc kubenswrapper[4811]: I0219 11:16:52.605646 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fs5sp" podUID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" containerName="registry-server" containerID="cri-o://2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2" gracePeriod=2 Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.124648 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pxmxb"] Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.126772 4811 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.152119 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.169530 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pxmxb"] Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.202284 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9tk6\" (UniqueName: \"kubernetes.io/projected/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-kube-api-access-v9tk6\") pod \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.202377 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-utilities\") pod \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.202520 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-catalog-content\") pod \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\" (UID: \"4e7e033c-0702-43ed-93bc-86a7ef2d6e19\") " Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.203395 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-catalog-content\") pod \"certified-operators-pxmxb\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.203472 4811 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x9n9\" (UniqueName: \"kubernetes.io/projected/4ecfa1fe-2338-4679-96d8-04438f1809b0-kube-api-access-4x9n9\") pod \"certified-operators-pxmxb\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.203531 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-utilities\") pod \"certified-operators-pxmxb\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.204136 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-utilities" (OuterVolumeSpecName: "utilities") pod "4e7e033c-0702-43ed-93bc-86a7ef2d6e19" (UID: "4e7e033c-0702-43ed-93bc-86a7ef2d6e19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.213785 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-kube-api-access-v9tk6" (OuterVolumeSpecName: "kube-api-access-v9tk6") pod "4e7e033c-0702-43ed-93bc-86a7ef2d6e19" (UID: "4e7e033c-0702-43ed-93bc-86a7ef2d6e19"). InnerVolumeSpecName "kube-api-access-v9tk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.304551 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-catalog-content\") pod \"certified-operators-pxmxb\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.304640 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x9n9\" (UniqueName: \"kubernetes.io/projected/4ecfa1fe-2338-4679-96d8-04438f1809b0-kube-api-access-4x9n9\") pod \"certified-operators-pxmxb\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.304687 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-utilities\") pod \"certified-operators-pxmxb\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.304841 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9tk6\" (UniqueName: \"kubernetes.io/projected/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-kube-api-access-v9tk6\") on node \"crc\" DevicePath \"\"" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.304859 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.305248 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-catalog-content\") pod \"certified-operators-pxmxb\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.305344 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-utilities\") pod \"certified-operators-pxmxb\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.334536 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x9n9\" (UniqueName: \"kubernetes.io/projected/4ecfa1fe-2338-4679-96d8-04438f1809b0-kube-api-access-4x9n9\") pod \"certified-operators-pxmxb\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.473167 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.617496 4811 generic.go:334] "Generic (PLEG): container finished" podID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" containerID="2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2" exitCode=0 Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.617558 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fs5sp" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.617569 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs5sp" event={"ID":"4e7e033c-0702-43ed-93bc-86a7ef2d6e19","Type":"ContainerDied","Data":"2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2"} Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.617617 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fs5sp" event={"ID":"4e7e033c-0702-43ed-93bc-86a7ef2d6e19","Type":"ContainerDied","Data":"9e7797f30cb2df76f91aa41c1fde613d542b0e9037b7fbc09f87102ee7a38bd2"} Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.617643 4811 scope.go:117] "RemoveContainer" containerID="2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2" Feb 19 11:16:53 crc kubenswrapper[4811]: I0219 11:16:53.642105 4811 scope.go:117] "RemoveContainer" containerID="3cdecbea43ca125b37cedbb72b1275d599974e3573d169c94109c36336ef6932" Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.060758 4811 scope.go:117] "RemoveContainer" containerID="a2f3db4050bc83be7e46f24aeff996f0994e6c3bb1faf9ea007f351831aef4c9" Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.345672 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pxmxb"] Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.352326 4811 scope.go:117] "RemoveContainer" containerID="2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2" Feb 19 11:16:54 crc kubenswrapper[4811]: E0219 11:16:54.352896 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2\": container with ID starting with 2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2 not found: ID does not exist" 
containerID="2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2" Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.352933 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2"} err="failed to get container status \"2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2\": rpc error: code = NotFound desc = could not find container \"2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2\": container with ID starting with 2e483b3e99f7c65997b11e96818d20b059e6e2f00196ccfb0e696bd79ea659a2 not found: ID does not exist" Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.352955 4811 scope.go:117] "RemoveContainer" containerID="3cdecbea43ca125b37cedbb72b1275d599974e3573d169c94109c36336ef6932" Feb 19 11:16:54 crc kubenswrapper[4811]: E0219 11:16:54.356103 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cdecbea43ca125b37cedbb72b1275d599974e3573d169c94109c36336ef6932\": container with ID starting with 3cdecbea43ca125b37cedbb72b1275d599974e3573d169c94109c36336ef6932 not found: ID does not exist" containerID="3cdecbea43ca125b37cedbb72b1275d599974e3573d169c94109c36336ef6932" Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.356133 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cdecbea43ca125b37cedbb72b1275d599974e3573d169c94109c36336ef6932"} err="failed to get container status \"3cdecbea43ca125b37cedbb72b1275d599974e3573d169c94109c36336ef6932\": rpc error: code = NotFound desc = could not find container \"3cdecbea43ca125b37cedbb72b1275d599974e3573d169c94109c36336ef6932\": container with ID starting with 3cdecbea43ca125b37cedbb72b1275d599974e3573d169c94109c36336ef6932 not found: ID does not exist" Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.356149 4811 scope.go:117] 
"RemoveContainer" containerID="a2f3db4050bc83be7e46f24aeff996f0994e6c3bb1faf9ea007f351831aef4c9" Feb 19 11:16:54 crc kubenswrapper[4811]: E0219 11:16:54.356625 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f3db4050bc83be7e46f24aeff996f0994e6c3bb1faf9ea007f351831aef4c9\": container with ID starting with a2f3db4050bc83be7e46f24aeff996f0994e6c3bb1faf9ea007f351831aef4c9 not found: ID does not exist" containerID="a2f3db4050bc83be7e46f24aeff996f0994e6c3bb1faf9ea007f351831aef4c9" Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.356647 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f3db4050bc83be7e46f24aeff996f0994e6c3bb1faf9ea007f351831aef4c9"} err="failed to get container status \"a2f3db4050bc83be7e46f24aeff996f0994e6c3bb1faf9ea007f351831aef4c9\": rpc error: code = NotFound desc = could not find container \"a2f3db4050bc83be7e46f24aeff996f0994e6c3bb1faf9ea007f351831aef4c9\": container with ID starting with a2f3db4050bc83be7e46f24aeff996f0994e6c3bb1faf9ea007f351831aef4c9 not found: ID does not exist" Feb 19 11:16:54 crc kubenswrapper[4811]: W0219 11:16:54.358462 4811 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ecfa1fe_2338_4679_96d8_04438f1809b0.slice/crio-8f0654d36353fd300e9065034403327dc3d8f9b771636f570d18e8d8757e31f3 WatchSource:0}: Error finding container 8f0654d36353fd300e9065034403327dc3d8f9b771636f570d18e8d8757e31f3: Status 404 returned error can't find the container with id 8f0654d36353fd300e9065034403327dc3d8f9b771636f570d18e8d8757e31f3 Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.677358 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxmxb" 
event={"ID":"4ecfa1fe-2338-4679-96d8-04438f1809b0","Type":"ContainerStarted","Data":"8f0654d36353fd300e9065034403327dc3d8f9b771636f570d18e8d8757e31f3"} Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.717039 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e7e033c-0702-43ed-93bc-86a7ef2d6e19" (UID: "4e7e033c-0702-43ed-93bc-86a7ef2d6e19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.739106 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7e033c-0702-43ed-93bc-86a7ef2d6e19-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.861879 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fs5sp"] Feb 19 11:16:54 crc kubenswrapper[4811]: I0219 11:16:54.872849 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fs5sp"] Feb 19 11:16:55 crc kubenswrapper[4811]: I0219 11:16:55.106309 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/util/0.log" Feb 19 11:16:55 crc kubenswrapper[4811]: I0219 11:16:55.272795 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/util/0.log" Feb 19 11:16:55 crc kubenswrapper[4811]: I0219 11:16:55.299774 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/pull/0.log" Feb 19 
11:16:55 crc kubenswrapper[4811]: I0219 11:16:55.313307 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/pull/0.log" Feb 19 11:16:55 crc kubenswrapper[4811]: I0219 11:16:55.493907 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/pull/0.log" Feb 19 11:16:55 crc kubenswrapper[4811]: I0219 11:16:55.498937 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/util/0.log" Feb 19 11:16:55 crc kubenswrapper[4811]: I0219 11:16:55.530756 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e4b2c2e4dd740ca928f2c3e98be087de1854d5a473f3dd8af3a14d175wpsfb_d439e144-6344-450b-bf1c-d141a94619d0/extract/0.log" Feb 19 11:16:55 crc kubenswrapper[4811]: I0219 11:16:55.688087 4811 generic.go:334] "Generic (PLEG): container finished" podID="4ecfa1fe-2338-4679-96d8-04438f1809b0" containerID="e1f1527f31b0f4d83c174002b39175fc4ae5124b3db89c7328d0a132d00b5e8d" exitCode=0 Feb 19 11:16:55 crc kubenswrapper[4811]: I0219 11:16:55.688142 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxmxb" event={"ID":"4ecfa1fe-2338-4679-96d8-04438f1809b0","Type":"ContainerDied","Data":"e1f1527f31b0f4d83c174002b39175fc4ae5124b3db89c7328d0a132d00b5e8d"} Feb 19 11:16:55 crc kubenswrapper[4811]: I0219 11:16:55.907671 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-dzgff_f5e56e20-0181-4f50-a112-b90aeb0ea9f4/manager/0.log" Feb 19 11:16:56 crc kubenswrapper[4811]: I0219 11:16:56.207093 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-wh6gz_f39b017a-34d2-491b-a44a-c3fc5ea573b7/manager/0.log" Feb 19 11:16:56 crc kubenswrapper[4811]: I0219 11:16:56.489498 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-9fqp5_5bf4b28e-b5bd-4caa-a07e-b35c20fa16e7/manager/0.log" Feb 19 11:16:56 crc kubenswrapper[4811]: I0219 11:16:56.601672 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" path="/var/lib/kubelet/pods/4e7e033c-0702-43ed-93bc-86a7ef2d6e19/volumes" Feb 19 11:16:56 crc kubenswrapper[4811]: I0219 11:16:56.703000 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxmxb" event={"ID":"4ecfa1fe-2338-4679-96d8-04438f1809b0","Type":"ContainerStarted","Data":"e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93"} Feb 19 11:16:56 crc kubenswrapper[4811]: I0219 11:16:56.782047 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-cjqjn_3c49a030-0a6a-434c-986e-6957d9d27043/manager/0.log" Feb 19 11:16:57 crc kubenswrapper[4811]: I0219 11:16:57.265720 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-d66mq_46b8deed-b9de-4b32-b8e4-c3edc4ad97f9/manager/0.log" Feb 19 11:16:57 crc kubenswrapper[4811]: I0219 11:16:57.520807 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-bqpcc_6b1bf169-cf7e-488d-a6c3-ee3dc0b92c58/manager/0.log" Feb 19 11:16:57 crc kubenswrapper[4811]: I0219 11:16:57.583497 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:16:57 crc kubenswrapper[4811]: E0219 11:16:57.583780 4811 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:16:57 crc kubenswrapper[4811]: I0219 11:16:57.718639 4811 generic.go:334] "Generic (PLEG): container finished" podID="4ecfa1fe-2338-4679-96d8-04438f1809b0" containerID="e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93" exitCode=0 Feb 19 11:16:57 crc kubenswrapper[4811]: I0219 11:16:57.718704 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxmxb" event={"ID":"4ecfa1fe-2338-4679-96d8-04438f1809b0","Type":"ContainerDied","Data":"e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93"} Feb 19 11:16:57 crc kubenswrapper[4811]: I0219 11:16:57.916780 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-fvdc7_b5e1b149-0349-4c1c-be95-b9c540da49f1/manager/0.log" Feb 19 11:16:58 crc kubenswrapper[4811]: I0219 11:16:58.028012 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-f64jn_8841254d-fb8a-4391-984a-02a39949ca31/manager/0.log" Feb 19 11:16:58 crc kubenswrapper[4811]: I0219 11:16:58.313592 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-frxgp_fe03e75a-b898-4de2-8569-bce0add53798/manager/0.log" Feb 19 11:16:58 crc kubenswrapper[4811]: I0219 11:16:58.412419 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-c84zh_f7a1ee5c-f40c-4b32-9ff9-46d3a11f64a3/manager/0.log" Feb 19 11:16:58 crc 
kubenswrapper[4811]: I0219 11:16:58.489707 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-rdtd4_03bff346-c835-4fba-9950-59e9e86d8a23/manager/0.log" Feb 19 11:16:58 crc kubenswrapper[4811]: I0219 11:16:58.730082 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxmxb" event={"ID":"4ecfa1fe-2338-4679-96d8-04438f1809b0","Type":"ContainerStarted","Data":"13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19"} Feb 19 11:16:58 crc kubenswrapper[4811]: I0219 11:16:58.761163 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-2lvw2_6c3aab50-1956-41f1-be69-aa976020f25e/manager/0.log" Feb 19 11:16:58 crc kubenswrapper[4811]: I0219 11:16:58.962328 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cl5drv_484c4e4b-0de6-427c-adde-b597fed189a7/manager/0.log" Feb 19 11:16:59 crc kubenswrapper[4811]: I0219 11:16:59.499404 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-d974764c4-flc2t_540975d2-b9cd-492e-9618-c4d31baf74a6/operator/0.log" Feb 19 11:16:59 crc kubenswrapper[4811]: I0219 11:16:59.751031 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-9fpk8_f23b3244-ec1f-411f-8dd2-15513a0c6c17/registry-server/0.log" Feb 19 11:17:00 crc kubenswrapper[4811]: I0219 11:17:00.105515 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-h5g6x_ed315fb7-c37f-4d65-a048-78c1a354ceb3/manager/0.log" Feb 19 11:17:00 crc kubenswrapper[4811]: I0219 11:17:00.335554 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-4tg5n_0dd03f3f-2d9f-433e-942f-ebed08429ccf/manager/0.log" Feb 19 11:17:00 crc kubenswrapper[4811]: I0219 11:17:00.593669 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5vw7t_9def098d-8e58-4145-a851-134a7b2620b5/operator/0.log" Feb 19 11:17:00 crc kubenswrapper[4811]: I0219 11:17:00.820026 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6f8zw_058e8da6-f607-4153-bb9e-af486f6c87e7/manager/0.log" Feb 19 11:17:01 crc kubenswrapper[4811]: I0219 11:17:01.149055 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-456kj_eab9f83b-209d-4688-8c7f-07840bae01ff/manager/0.log" Feb 19 11:17:01 crc kubenswrapper[4811]: I0219 11:17:01.352955 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-8xp7b_75dd64a6-abdc-4d68-b9ca-abe41484b894/manager/0.log" Feb 19 11:17:01 crc kubenswrapper[4811]: I0219 11:17:01.581289 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-czvjr_b4dc2b86-5147-4420-a9ef-a2ef94991ed3/manager/0.log" Feb 19 11:17:01 crc kubenswrapper[4811]: I0219 11:17:01.605613 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7b558f465f-z4vhq_7f4a5782-be5f-4a0d-be5d-b8a5014c4711/manager/0.log" Feb 19 11:17:01 crc kubenswrapper[4811]: I0219 11:17:01.797755 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-jzwmv_34488f24-fe02-449f-a343-288ee29f5ec4/manager/0.log" Feb 19 11:17:03 crc kubenswrapper[4811]: I0219 11:17:03.473320 4811 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:17:03 crc kubenswrapper[4811]: I0219 11:17:03.473691 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:17:03 crc kubenswrapper[4811]: I0219 11:17:03.529010 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:17:03 crc kubenswrapper[4811]: I0219 11:17:03.562545 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pxmxb" podStartSLOduration=8.155444352 podStartE2EDuration="10.562517338s" podCreationTimestamp="2026-02-19 11:16:53 +0000 UTC" firstStartedPulling="2026-02-19 11:16:55.690227782 +0000 UTC m=+4104.216692468" lastFinishedPulling="2026-02-19 11:16:58.097300768 +0000 UTC m=+4106.623765454" observedRunningTime="2026-02-19 11:16:58.749235768 +0000 UTC m=+4107.275700464" watchObservedRunningTime="2026-02-19 11:17:03.562517338 +0000 UTC m=+4112.088982014" Feb 19 11:17:03 crc kubenswrapper[4811]: I0219 11:17:03.837150 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:17:03 crc kubenswrapper[4811]: I0219 11:17:03.899298 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pxmxb"] Feb 19 11:17:05 crc kubenswrapper[4811]: I0219 11:17:05.795113 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pxmxb" podUID="4ecfa1fe-2338-4679-96d8-04438f1809b0" containerName="registry-server" containerID="cri-o://13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19" gracePeriod=2 Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.308372 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.364800 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x9n9\" (UniqueName: \"kubernetes.io/projected/4ecfa1fe-2338-4679-96d8-04438f1809b0-kube-api-access-4x9n9\") pod \"4ecfa1fe-2338-4679-96d8-04438f1809b0\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.364983 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-utilities\") pod \"4ecfa1fe-2338-4679-96d8-04438f1809b0\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.365163 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-catalog-content\") pod \"4ecfa1fe-2338-4679-96d8-04438f1809b0\" (UID: \"4ecfa1fe-2338-4679-96d8-04438f1809b0\") " Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.366378 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-utilities" (OuterVolumeSpecName: "utilities") pod "4ecfa1fe-2338-4679-96d8-04438f1809b0" (UID: "4ecfa1fe-2338-4679-96d8-04438f1809b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.372702 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ecfa1fe-2338-4679-96d8-04438f1809b0-kube-api-access-4x9n9" (OuterVolumeSpecName: "kube-api-access-4x9n9") pod "4ecfa1fe-2338-4679-96d8-04438f1809b0" (UID: "4ecfa1fe-2338-4679-96d8-04438f1809b0"). InnerVolumeSpecName "kube-api-access-4x9n9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.466761 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x9n9\" (UniqueName: \"kubernetes.io/projected/4ecfa1fe-2338-4679-96d8-04438f1809b0-kube-api-access-4x9n9\") on node \"crc\" DevicePath \"\"" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.467066 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.479200 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ecfa1fe-2338-4679-96d8-04438f1809b0" (UID: "4ecfa1fe-2338-4679-96d8-04438f1809b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.569041 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ecfa1fe-2338-4679-96d8-04438f1809b0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.806903 4811 generic.go:334] "Generic (PLEG): container finished" podID="4ecfa1fe-2338-4679-96d8-04438f1809b0" containerID="13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19" exitCode=0 Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.807002 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pxmxb" event={"ID":"4ecfa1fe-2338-4679-96d8-04438f1809b0","Type":"ContainerDied","Data":"13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19"} Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.807254 4811 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-pxmxb" event={"ID":"4ecfa1fe-2338-4679-96d8-04438f1809b0","Type":"ContainerDied","Data":"8f0654d36353fd300e9065034403327dc3d8f9b771636f570d18e8d8757e31f3"} Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.807280 4811 scope.go:117] "RemoveContainer" containerID="13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.807041 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pxmxb" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.860579 4811 scope.go:117] "RemoveContainer" containerID="e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.872077 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pxmxb"] Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.898240 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pxmxb"] Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.911718 4811 scope.go:117] "RemoveContainer" containerID="e1f1527f31b0f4d83c174002b39175fc4ae5124b3db89c7328d0a132d00b5e8d" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.952494 4811 scope.go:117] "RemoveContainer" containerID="13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19" Feb 19 11:17:06 crc kubenswrapper[4811]: E0219 11:17:06.953541 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19\": container with ID starting with 13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19 not found: ID does not exist" containerID="13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 
11:17:06.953580 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19"} err="failed to get container status \"13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19\": rpc error: code = NotFound desc = could not find container \"13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19\": container with ID starting with 13c507cea2a82bdb4f19c84744566c162a873b294609a26e919d6b1cf13cdc19 not found: ID does not exist" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.953614 4811 scope.go:117] "RemoveContainer" containerID="e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93" Feb 19 11:17:06 crc kubenswrapper[4811]: E0219 11:17:06.955887 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93\": container with ID starting with e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93 not found: ID does not exist" containerID="e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.955941 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93"} err="failed to get container status \"e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93\": rpc error: code = NotFound desc = could not find container \"e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93\": container with ID starting with e33a5b02777288c6e1bd678605e48a7d2cee4931ac2a9a89a06e7b42022c4c93 not found: ID does not exist" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.955989 4811 scope.go:117] "RemoveContainer" containerID="e1f1527f31b0f4d83c174002b39175fc4ae5124b3db89c7328d0a132d00b5e8d" Feb 19 11:17:06 crc 
kubenswrapper[4811]: E0219 11:17:06.956521 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f1527f31b0f4d83c174002b39175fc4ae5124b3db89c7328d0a132d00b5e8d\": container with ID starting with e1f1527f31b0f4d83c174002b39175fc4ae5124b3db89c7328d0a132d00b5e8d not found: ID does not exist" containerID="e1f1527f31b0f4d83c174002b39175fc4ae5124b3db89c7328d0a132d00b5e8d" Feb 19 11:17:06 crc kubenswrapper[4811]: I0219 11:17:06.956674 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f1527f31b0f4d83c174002b39175fc4ae5124b3db89c7328d0a132d00b5e8d"} err="failed to get container status \"e1f1527f31b0f4d83c174002b39175fc4ae5124b3db89c7328d0a132d00b5e8d\": rpc error: code = NotFound desc = could not find container \"e1f1527f31b0f4d83c174002b39175fc4ae5124b3db89c7328d0a132d00b5e8d\": container with ID starting with e1f1527f31b0f4d83c174002b39175fc4ae5124b3db89c7328d0a132d00b5e8d not found: ID does not exist" Feb 19 11:17:07 crc kubenswrapper[4811]: I0219 11:17:07.451648 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-s42cl_57410614-54e3-49cb-8afe-ee60a861f161/manager/0.log" Feb 19 11:17:08 crc kubenswrapper[4811]: I0219 11:17:08.594636 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ecfa1fe-2338-4679-96d8-04438f1809b0" path="/var/lib/kubelet/pods/4ecfa1fe-2338-4679-96d8-04438f1809b0/volumes" Feb 19 11:17:11 crc kubenswrapper[4811]: I0219 11:17:11.584730 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:17:11 crc kubenswrapper[4811]: E0219 11:17:11.587305 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:17:24 crc kubenswrapper[4811]: I0219 11:17:24.562969 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-84dtz_1fcdf22f-c384-45cd-a738-a6ec8a9d181e/control-plane-machine-set-operator/0.log" Feb 19 11:17:24 crc kubenswrapper[4811]: I0219 11:17:24.770915 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bf2qf_077abdec-a39e-43d3-bbba-8dd0afbb448b/kube-rbac-proxy/0.log" Feb 19 11:17:24 crc kubenswrapper[4811]: I0219 11:17:24.794330 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bf2qf_077abdec-a39e-43d3-bbba-8dd0afbb448b/machine-api-operator/0.log" Feb 19 11:17:26 crc kubenswrapper[4811]: I0219 11:17:26.583990 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:17:26 crc kubenswrapper[4811]: E0219 11:17:26.584667 4811 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v97zm_openshift-machine-config-operator(e2e42c80-bece-4066-abad-05bc661d33f5)\"" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" Feb 19 11:17:40 crc kubenswrapper[4811]: I0219 11:17:40.231940 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-npddf_09c94fb6-b7d0-434f-8f41-d8dc7ff56e76/cert-manager-controller/0.log" Feb 19 11:17:40 crc kubenswrapper[4811]: I0219 11:17:40.440742 4811 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-wh2j5_c5c3873a-d5b3-4fb9-8ac2-118ae63676a0/cert-manager-cainjector/0.log" Feb 19 11:17:40 crc kubenswrapper[4811]: I0219 11:17:40.534410 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-vl4s4_d9b74399-e5df-4e67-b52d-85b37e1a5125/cert-manager-webhook/0.log" Feb 19 11:17:40 crc kubenswrapper[4811]: I0219 11:17:40.583600 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:17:41 crc kubenswrapper[4811]: I0219 11:17:41.157551 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"e1b9cc05decc621cd3844b8e41e9ddb80370e9d3ed02ccf21f44a8835a167d87"} Feb 19 11:17:55 crc kubenswrapper[4811]: I0219 11:17:55.181887 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-4lhw2_4f52fbce-dab0-4cda-b2c7-80a4caab9fff/nmstate-console-plugin/0.log" Feb 19 11:17:55 crc kubenswrapper[4811]: I0219 11:17:55.608438 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lcv4q_56faa147-2a2c-437e-a221-88fa87e89a56/nmstate-handler/0.log" Feb 19 11:17:55 crc kubenswrapper[4811]: I0219 11:17:55.613908 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-xgsh7_d74c5c61-8e57-4702-8414-0e94a5080ce5/kube-rbac-proxy/0.log" Feb 19 11:17:55 crc kubenswrapper[4811]: I0219 11:17:55.791752 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-xgsh7_d74c5c61-8e57-4702-8414-0e94a5080ce5/nmstate-metrics/0.log" Feb 19 11:17:55 crc kubenswrapper[4811]: I0219 11:17:55.834834 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-s2r8d_cc55c956-c42d-49cc-a97b-cbb28863cdfd/nmstate-operator/0.log" Feb 19 11:17:55 crc kubenswrapper[4811]: I0219 11:17:55.991951 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-pdhvl_8635072a-afd3-426c-bed8-f0aa9ddf1371/nmstate-webhook/0.log" Feb 19 11:18:25 crc kubenswrapper[4811]: I0219 11:18:25.141151 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-7sw8d_9eec6b61-1b8d-4615-8333-d93630c46ce2/kube-rbac-proxy/0.log" Feb 19 11:18:25 crc kubenswrapper[4811]: I0219 11:18:25.211739 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-7sw8d_9eec6b61-1b8d-4615-8333-d93630c46ce2/controller/0.log" Feb 19 11:18:25 crc kubenswrapper[4811]: I0219 11:18:25.621779 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-frr-files/0.log" Feb 19 11:18:25 crc kubenswrapper[4811]: I0219 11:18:25.795527 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-frr-files/0.log" Feb 19 11:18:25 crc kubenswrapper[4811]: I0219 11:18:25.803245 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-reloader/0.log" Feb 19 11:18:25 crc kubenswrapper[4811]: I0219 11:18:25.825745 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-reloader/0.log" Feb 19 11:18:25 crc kubenswrapper[4811]: I0219 11:18:25.830995 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-metrics/0.log" Feb 19 11:18:25 crc kubenswrapper[4811]: I0219 11:18:25.985048 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-reloader/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.002843 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-frr-files/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.028432 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-metrics/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.061310 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-metrics/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.225766 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-reloader/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.228401 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-frr-files/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.239793 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/cp-metrics/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.258971 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/controller/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.459179 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/kube-rbac-proxy/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.471843 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/frr-metrics/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.484363 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/kube-rbac-proxy-frr/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.731675 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/reloader/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.803801 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-fx4zj_5742d3da-f213-4a03-a18d-55cbe88d82b5/frr-k8s-webhook-server/0.log" Feb 19 11:18:26 crc kubenswrapper[4811]: I0219 11:18:26.982652 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b977f8b79-xtsfv_b4c117aa-5a81-4f2a-88d0-de69a1da3c84/manager/0.log" Feb 19 11:18:27 crc kubenswrapper[4811]: I0219 11:18:27.197524 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-c47dd88fc-mx6n7_3647a1c7-33fe-440f-b648-ace58f1deb0d/webhook-server/0.log" Feb 19 11:18:27 crc kubenswrapper[4811]: I0219 11:18:27.250462 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jswp7_4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461/kube-rbac-proxy/0.log" Feb 19 11:18:27 crc kubenswrapper[4811]: I0219 11:18:27.924256 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jswp7_4bb4dc5d-a3e6-4fde-b2cb-1d1e5d0e6461/speaker/0.log" Feb 19 11:18:27 crc kubenswrapper[4811]: I0219 11:18:27.997200 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ngn4k_84b9332e-fb14-4b7e-844e-d902809b4e4c/frr/0.log" Feb 19 11:18:42 crc kubenswrapper[4811]: I0219 11:18:42.716934 4811 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/util/0.log" Feb 19 11:18:42 crc kubenswrapper[4811]: I0219 11:18:42.914953 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/util/0.log" Feb 19 11:18:42 crc kubenswrapper[4811]: I0219 11:18:42.936266 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/pull/0.log" Feb 19 11:18:42 crc kubenswrapper[4811]: I0219 11:18:42.956581 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/pull/0.log" Feb 19 11:18:43 crc kubenswrapper[4811]: I0219 11:18:43.188640 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/util/0.log" Feb 19 11:18:43 crc kubenswrapper[4811]: I0219 11:18:43.194723 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/extract/0.log" Feb 19 11:18:43 crc kubenswrapper[4811]: I0219 11:18:43.219346 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213p48tv_2dfded96-366b-42d7-b04a-bfa8e4a203a8/pull/0.log" Feb 19 11:18:43 crc kubenswrapper[4811]: I0219 11:18:43.368183 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-utilities/0.log" Feb 19 
11:18:43 crc kubenswrapper[4811]: I0219 11:18:43.516773 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-content/0.log" Feb 19 11:18:43 crc kubenswrapper[4811]: I0219 11:18:43.527931 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-utilities/0.log" Feb 19 11:18:43 crc kubenswrapper[4811]: I0219 11:18:43.558611 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-content/0.log" Feb 19 11:18:43 crc kubenswrapper[4811]: I0219 11:18:43.714022 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-content/0.log" Feb 19 11:18:43 crc kubenswrapper[4811]: I0219 11:18:43.725771 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/extract-utilities/0.log" Feb 19 11:18:43 crc kubenswrapper[4811]: I0219 11:18:43.945769 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-utilities/0.log" Feb 19 11:18:44 crc kubenswrapper[4811]: I0219 11:18:44.212029 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-utilities/0.log" Feb 19 11:18:44 crc kubenswrapper[4811]: I0219 11:18:44.228980 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-content/0.log" Feb 19 11:18:44 crc kubenswrapper[4811]: I0219 11:18:44.240808 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-content/0.log" Feb 19 11:18:44 crc kubenswrapper[4811]: I0219 11:18:44.245224 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qftq4_8b733b4b-8597-484d-a6a6-b5635e80b483/registry-server/0.log" Feb 19 11:18:44 crc kubenswrapper[4811]: I0219 11:18:44.730187 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-content/0.log" Feb 19 11:18:44 crc kubenswrapper[4811]: I0219 11:18:44.736014 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/extract-utilities/0.log" Feb 19 11:18:45 crc kubenswrapper[4811]: I0219 11:18:45.163532 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/util/0.log" Feb 19 11:18:45 crc kubenswrapper[4811]: I0219 11:18:45.304807 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-grnvj_8983e037-aaa4-4bc6-82a1-8f312808a88a/registry-server/0.log" Feb 19 11:18:45 crc kubenswrapper[4811]: I0219 11:18:45.379557 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/pull/0.log" Feb 19 11:18:45 crc kubenswrapper[4811]: I0219 11:18:45.390031 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/util/0.log" Feb 19 11:18:45 crc kubenswrapper[4811]: I0219 11:18:45.409574 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/pull/0.log" Feb 19 11:18:45 crc kubenswrapper[4811]: I0219 11:18:45.629139 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/pull/0.log" Feb 19 11:18:45 crc kubenswrapper[4811]: I0219 11:18:45.679386 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/util/0.log" Feb 19 11:18:45 crc kubenswrapper[4811]: I0219 11:18:45.701652 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca229v7_ee3ec824-bdc8-4428-8c43-82503b8f2760/extract/0.log" Feb 19 11:18:45 crc kubenswrapper[4811]: I0219 11:18:45.833141 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7lcg9_5fd28c56-84ca-4b57-af19-9c11dc95b397/marketplace-operator/0.log" Feb 19 11:18:45 crc kubenswrapper[4811]: I0219 11:18:45.879939 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-utilities/0.log" Feb 19 11:18:46 crc kubenswrapper[4811]: I0219 11:18:46.096068 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-content/0.log" Feb 19 11:18:46 crc kubenswrapper[4811]: I0219 11:18:46.117833 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-utilities/0.log" Feb 19 11:18:46 crc kubenswrapper[4811]: I0219 11:18:46.126557 4811 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-content/0.log" Feb 19 11:18:46 crc kubenswrapper[4811]: I0219 11:18:46.864672 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-utilities/0.log" Feb 19 11:18:46 crc kubenswrapper[4811]: I0219 11:18:46.878606 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/extract-content/0.log" Feb 19 11:18:46 crc kubenswrapper[4811]: I0219 11:18:46.916924 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-82lbs_587cc9ad-4897-4e3e-b6ee-fdbc9ee4532e/registry-server/0.log" Feb 19 11:18:47 crc kubenswrapper[4811]: I0219 11:18:47.074226 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-utilities/0.log" Feb 19 11:18:47 crc kubenswrapper[4811]: I0219 11:18:47.309644 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-utilities/0.log" Feb 19 11:18:47 crc kubenswrapper[4811]: I0219 11:18:47.369494 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-content/0.log" Feb 19 11:18:47 crc kubenswrapper[4811]: I0219 11:18:47.400137 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-content/0.log" Feb 19 11:18:47 crc kubenswrapper[4811]: I0219 11:18:47.580726 4811 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-utilities/0.log" Feb 19 11:18:47 crc kubenswrapper[4811]: I0219 11:18:47.630033 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/extract-content/0.log" Feb 19 11:18:48 crc kubenswrapper[4811]: I0219 11:18:48.147144 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7r9tk_0ea7e423-8a9e-4b21-a701-0682236bec67/registry-server/0.log" Feb 19 11:19:48 crc kubenswrapper[4811]: I0219 11:19:48.239051 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-ntk89" podUID="a71943f3-8cb9-48ff-aaf0-38a30132ffd4" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 11:19:56 crc kubenswrapper[4811]: I0219 11:19:56.790683 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:19:56 crc kubenswrapper[4811]: I0219 11:19:56.791803 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:20:26 crc kubenswrapper[4811]: I0219 11:20:26.788801 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 19 11:20:26 crc kubenswrapper[4811]: I0219 11:20:26.789624 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:20:44 crc kubenswrapper[4811]: I0219 11:20:44.907790 4811 generic.go:334] "Generic (PLEG): container finished" podID="fd594f5c-e579-4c78-a29a-6ea266fd98d0" containerID="2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272" exitCode=0 Feb 19 11:20:44 crc kubenswrapper[4811]: I0219 11:20:44.907928 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fj7sq/must-gather-842pr" event={"ID":"fd594f5c-e579-4c78-a29a-6ea266fd98d0","Type":"ContainerDied","Data":"2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272"} Feb 19 11:20:44 crc kubenswrapper[4811]: I0219 11:20:44.909106 4811 scope.go:117] "RemoveContainer" containerID="2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272" Feb 19 11:20:45 crc kubenswrapper[4811]: I0219 11:20:45.062119 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fj7sq_must-gather-842pr_fd594f5c-e579-4c78-a29a-6ea266fd98d0/gather/0.log" Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.230160 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fj7sq/must-gather-842pr"] Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.231294 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-fj7sq/must-gather-842pr" podUID="fd594f5c-e579-4c78-a29a-6ea266fd98d0" containerName="copy" containerID="cri-o://386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688" gracePeriod=2 Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.246374 4811 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fj7sq/must-gather-842pr"] Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.663331 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fj7sq_must-gather-842pr_fd594f5c-e579-4c78-a29a-6ea266fd98d0/copy/0.log" Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.664295 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fj7sq/must-gather-842pr" Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.673100 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd594f5c-e579-4c78-a29a-6ea266fd98d0-must-gather-output\") pod \"fd594f5c-e579-4c78-a29a-6ea266fd98d0\" (UID: \"fd594f5c-e579-4c78-a29a-6ea266fd98d0\") " Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.673475 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phf26\" (UniqueName: \"kubernetes.io/projected/fd594f5c-e579-4c78-a29a-6ea266fd98d0-kube-api-access-phf26\") pod \"fd594f5c-e579-4c78-a29a-6ea266fd98d0\" (UID: \"fd594f5c-e579-4c78-a29a-6ea266fd98d0\") " Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.682190 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd594f5c-e579-4c78-a29a-6ea266fd98d0-kube-api-access-phf26" (OuterVolumeSpecName: "kube-api-access-phf26") pod "fd594f5c-e579-4c78-a29a-6ea266fd98d0" (UID: "fd594f5c-e579-4c78-a29a-6ea266fd98d0"). InnerVolumeSpecName "kube-api-access-phf26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.775710 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phf26\" (UniqueName: \"kubernetes.io/projected/fd594f5c-e579-4c78-a29a-6ea266fd98d0-kube-api-access-phf26\") on node \"crc\" DevicePath \"\"" Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.788062 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.788146 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.788205 4811 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.789394 4811 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1b9cc05decc621cd3844b8e41e9ddb80370e9d3ed02ccf21f44a8835a167d87"} pod="openshift-machine-config-operator/machine-config-daemon-v97zm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.789474 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" 
containerName="machine-config-daemon" containerID="cri-o://e1b9cc05decc621cd3844b8e41e9ddb80370e9d3ed02ccf21f44a8835a167d87" gracePeriod=600 Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.867203 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd594f5c-e579-4c78-a29a-6ea266fd98d0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fd594f5c-e579-4c78-a29a-6ea266fd98d0" (UID: "fd594f5c-e579-4c78-a29a-6ea266fd98d0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:20:56 crc kubenswrapper[4811]: I0219 11:20:56.877806 4811 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fd594f5c-e579-4c78-a29a-6ea266fd98d0-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.069472 4811 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fj7sq_must-gather-842pr_fd594f5c-e579-4c78-a29a-6ea266fd98d0/copy/0.log" Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.069977 4811 generic.go:334] "Generic (PLEG): container finished" podID="fd594f5c-e579-4c78-a29a-6ea266fd98d0" containerID="386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688" exitCode=143 Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.070076 4811 scope.go:117] "RemoveContainer" containerID="386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688" Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.070092 4811 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fj7sq/must-gather-842pr" Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.075441 4811 generic.go:334] "Generic (PLEG): container finished" podID="e2e42c80-bece-4066-abad-05bc661d33f5" containerID="e1b9cc05decc621cd3844b8e41e9ddb80370e9d3ed02ccf21f44a8835a167d87" exitCode=0 Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.075493 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerDied","Data":"e1b9cc05decc621cd3844b8e41e9ddb80370e9d3ed02ccf21f44a8835a167d87"} Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.103540 4811 scope.go:117] "RemoveContainer" containerID="2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272" Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.210441 4811 scope.go:117] "RemoveContainer" containerID="386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688" Feb 19 11:20:57 crc kubenswrapper[4811]: E0219 11:20:57.211239 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688\": container with ID starting with 386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688 not found: ID does not exist" containerID="386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688" Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.211289 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688"} err="failed to get container status \"386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688\": rpc error: code = NotFound desc = could not find container \"386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688\": container with ID starting with 
386d30eb6bd51cfe4d24fc3dda6ae2b10bf5d73bb0f4aa1f2d548c2f724bb688 not found: ID does not exist" Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.211321 4811 scope.go:117] "RemoveContainer" containerID="2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272" Feb 19 11:20:57 crc kubenswrapper[4811]: E0219 11:20:57.212110 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272\": container with ID starting with 2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272 not found: ID does not exist" containerID="2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272" Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.212166 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272"} err="failed to get container status \"2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272\": rpc error: code = NotFound desc = could not find container \"2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272\": container with ID starting with 2efc4e02dd4cf932af3fb38642c88501555323989adb2a9d1dc26a2048d50272 not found: ID does not exist" Feb 19 11:20:57 crc kubenswrapper[4811]: I0219 11:20:57.212200 4811 scope.go:117] "RemoveContainer" containerID="133c8091f936fab48e586b7bdce009236e4848e9a33aa81fe2541aee87912496" Feb 19 11:20:58 crc kubenswrapper[4811]: I0219 11:20:58.087377 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" event={"ID":"e2e42c80-bece-4066-abad-05bc661d33f5","Type":"ContainerStarted","Data":"557d7873d20fc02db65d1ed217ea2d8156f2019c49f615530b9cec99e0793e11"} Feb 19 11:20:58 crc kubenswrapper[4811]: I0219 11:20:58.604288 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fd594f5c-e579-4c78-a29a-6ea266fd98d0" path="/var/lib/kubelet/pods/fd594f5c-e579-4c78-a29a-6ea266fd98d0/volumes" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.068065 4811 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zvpw2"] Feb 19 11:22:38 crc kubenswrapper[4811]: E0219 11:22:38.069147 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ecfa1fe-2338-4679-96d8-04438f1809b0" containerName="extract-utilities" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069167 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ecfa1fe-2338-4679-96d8-04438f1809b0" containerName="extract-utilities" Feb 19 11:22:38 crc kubenswrapper[4811]: E0219 11:22:38.069186 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" containerName="extract-content" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069194 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" containerName="extract-content" Feb 19 11:22:38 crc kubenswrapper[4811]: E0219 11:22:38.069227 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" containerName="registry-server" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069235 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" containerName="registry-server" Feb 19 11:22:38 crc kubenswrapper[4811]: E0219 11:22:38.069249 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd594f5c-e579-4c78-a29a-6ea266fd98d0" containerName="gather" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069257 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd594f5c-e579-4c78-a29a-6ea266fd98d0" containerName="gather" Feb 19 11:22:38 crc kubenswrapper[4811]: E0219 11:22:38.069272 4811 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4ecfa1fe-2338-4679-96d8-04438f1809b0" containerName="registry-server" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069279 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ecfa1fe-2338-4679-96d8-04438f1809b0" containerName="registry-server" Feb 19 11:22:38 crc kubenswrapper[4811]: E0219 11:22:38.069290 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd594f5c-e579-4c78-a29a-6ea266fd98d0" containerName="copy" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069297 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd594f5c-e579-4c78-a29a-6ea266fd98d0" containerName="copy" Feb 19 11:22:38 crc kubenswrapper[4811]: E0219 11:22:38.069311 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ecfa1fe-2338-4679-96d8-04438f1809b0" containerName="extract-content" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069319 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ecfa1fe-2338-4679-96d8-04438f1809b0" containerName="extract-content" Feb 19 11:22:38 crc kubenswrapper[4811]: E0219 11:22:38.069340 4811 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" containerName="extract-utilities" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069367 4811 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" containerName="extract-utilities" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069635 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e7e033c-0702-43ed-93bc-86a7ef2d6e19" containerName="registry-server" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069655 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ecfa1fe-2338-4679-96d8-04438f1809b0" containerName="registry-server" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069673 4811 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fd594f5c-e579-4c78-a29a-6ea266fd98d0" containerName="copy" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.069684 4811 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd594f5c-e579-4c78-a29a-6ea266fd98d0" containerName="gather" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.074881 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.093264 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvpw2"] Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.180891 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpffw\" (UniqueName: \"kubernetes.io/projected/32f9ccb7-b382-456d-aead-fa46e4105e36-kube-api-access-vpffw\") pod \"redhat-marketplace-zvpw2\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.181508 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-catalog-content\") pod \"redhat-marketplace-zvpw2\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.181747 4811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-utilities\") pod \"redhat-marketplace-zvpw2\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.283821 4811 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-catalog-content\") pod \"redhat-marketplace-zvpw2\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.283947 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-utilities\") pod \"redhat-marketplace-zvpw2\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.284104 4811 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpffw\" (UniqueName: \"kubernetes.io/projected/32f9ccb7-b382-456d-aead-fa46e4105e36-kube-api-access-vpffw\") pod \"redhat-marketplace-zvpw2\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.284501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-utilities\") pod \"redhat-marketplace-zvpw2\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.284501 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-catalog-content\") pod \"redhat-marketplace-zvpw2\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.312976 4811 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpffw\" (UniqueName: 
\"kubernetes.io/projected/32f9ccb7-b382-456d-aead-fa46e4105e36-kube-api-access-vpffw\") pod \"redhat-marketplace-zvpw2\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.399330 4811 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:38 crc kubenswrapper[4811]: I0219 11:22:38.929658 4811 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvpw2"] Feb 19 11:22:40 crc kubenswrapper[4811]: I0219 11:22:40.157208 4811 generic.go:334] "Generic (PLEG): container finished" podID="32f9ccb7-b382-456d-aead-fa46e4105e36" containerID="703491321f6c264b194e0d5c6bbd7b33e2cd3017f1d1eb1c872b57285b4f46a5" exitCode=0 Feb 19 11:22:40 crc kubenswrapper[4811]: I0219 11:22:40.157305 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvpw2" event={"ID":"32f9ccb7-b382-456d-aead-fa46e4105e36","Type":"ContainerDied","Data":"703491321f6c264b194e0d5c6bbd7b33e2cd3017f1d1eb1c872b57285b4f46a5"} Feb 19 11:22:40 crc kubenswrapper[4811]: I0219 11:22:40.158085 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvpw2" event={"ID":"32f9ccb7-b382-456d-aead-fa46e4105e36","Type":"ContainerStarted","Data":"f26e20f46639b522c41d4b10f5e0487f5ed98114f19b6318e7c0d8da0931195d"} Feb 19 11:22:40 crc kubenswrapper[4811]: I0219 11:22:40.161027 4811 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 11:22:41 crc kubenswrapper[4811]: I0219 11:22:41.170627 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvpw2" event={"ID":"32f9ccb7-b382-456d-aead-fa46e4105e36","Type":"ContainerStarted","Data":"0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec"} Feb 19 11:22:42 crc 
kubenswrapper[4811]: I0219 11:22:42.188710 4811 generic.go:334] "Generic (PLEG): container finished" podID="32f9ccb7-b382-456d-aead-fa46e4105e36" containerID="0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec" exitCode=0 Feb 19 11:22:42 crc kubenswrapper[4811]: I0219 11:22:42.188957 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvpw2" event={"ID":"32f9ccb7-b382-456d-aead-fa46e4105e36","Type":"ContainerDied","Data":"0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec"} Feb 19 11:22:43 crc kubenswrapper[4811]: I0219 11:22:43.203274 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvpw2" event={"ID":"32f9ccb7-b382-456d-aead-fa46e4105e36","Type":"ContainerStarted","Data":"2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5"} Feb 19 11:22:43 crc kubenswrapper[4811]: I0219 11:22:43.226636 4811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zvpw2" podStartSLOduration=2.795034225 podStartE2EDuration="5.226609468s" podCreationTimestamp="2026-02-19 11:22:38 +0000 UTC" firstStartedPulling="2026-02-19 11:22:40.160649204 +0000 UTC m=+4448.687113910" lastFinishedPulling="2026-02-19 11:22:42.592224447 +0000 UTC m=+4451.118689153" observedRunningTime="2026-02-19 11:22:43.225707895 +0000 UTC m=+4451.752172581" watchObservedRunningTime="2026-02-19 11:22:43.226609468 +0000 UTC m=+4451.753074154" Feb 19 11:22:48 crc kubenswrapper[4811]: I0219 11:22:48.399672 4811 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:48 crc kubenswrapper[4811]: I0219 11:22:48.401691 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:48 crc kubenswrapper[4811]: I0219 11:22:48.449502 4811 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:49 crc kubenswrapper[4811]: I0219 11:22:49.320379 4811 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:49 crc kubenswrapper[4811]: I0219 11:22:49.377523 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvpw2"] Feb 19 11:22:51 crc kubenswrapper[4811]: I0219 11:22:51.291497 4811 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zvpw2" podUID="32f9ccb7-b382-456d-aead-fa46e4105e36" containerName="registry-server" containerID="cri-o://2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5" gracePeriod=2 Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.282640 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.310999 4811 generic.go:334] "Generic (PLEG): container finished" podID="32f9ccb7-b382-456d-aead-fa46e4105e36" containerID="2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5" exitCode=0 Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.311076 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvpw2" event={"ID":"32f9ccb7-b382-456d-aead-fa46e4105e36","Type":"ContainerDied","Data":"2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5"} Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.311142 4811 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvpw2" event={"ID":"32f9ccb7-b382-456d-aead-fa46e4105e36","Type":"ContainerDied","Data":"f26e20f46639b522c41d4b10f5e0487f5ed98114f19b6318e7c0d8da0931195d"} Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.311171 4811 scope.go:117] 
"RemoveContainer" containerID="2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.311438 4811 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvpw2" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.340300 4811 scope.go:117] "RemoveContainer" containerID="0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.362816 4811 scope.go:117] "RemoveContainer" containerID="703491321f6c264b194e0d5c6bbd7b33e2cd3017f1d1eb1c872b57285b4f46a5" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.414396 4811 scope.go:117] "RemoveContainer" containerID="2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5" Feb 19 11:22:52 crc kubenswrapper[4811]: E0219 11:22:52.415505 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5\": container with ID starting with 2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5 not found: ID does not exist" containerID="2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.415618 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5"} err="failed to get container status \"2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5\": rpc error: code = NotFound desc = could not find container \"2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5\": container with ID starting with 2259684b148f3d1af813b74ceba529c242e53b9f3c74283dc7a939751f9811c5 not found: ID does not exist" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.415651 4811 scope.go:117] "RemoveContainer" 
containerID="0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec" Feb 19 11:22:52 crc kubenswrapper[4811]: E0219 11:22:52.416170 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec\": container with ID starting with 0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec not found: ID does not exist" containerID="0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.416194 4811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec"} err="failed to get container status \"0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec\": rpc error: code = NotFound desc = could not find container \"0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec\": container with ID starting with 0e4eb9fe4fafed1aba449753b283d8faf7448caf92800d060772d69a83d886ec not found: ID does not exist" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.416210 4811 scope.go:117] "RemoveContainer" containerID="703491321f6c264b194e0d5c6bbd7b33e2cd3017f1d1eb1c872b57285b4f46a5" Feb 19 11:22:52 crc kubenswrapper[4811]: E0219 11:22:52.416646 4811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"703491321f6c264b194e0d5c6bbd7b33e2cd3017f1d1eb1c872b57285b4f46a5\": container with ID starting with 703491321f6c264b194e0d5c6bbd7b33e2cd3017f1d1eb1c872b57285b4f46a5 not found: ID does not exist" containerID="703491321f6c264b194e0d5c6bbd7b33e2cd3017f1d1eb1c872b57285b4f46a5" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.416673 4811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"703491321f6c264b194e0d5c6bbd7b33e2cd3017f1d1eb1c872b57285b4f46a5"} err="failed to get container status \"703491321f6c264b194e0d5c6bbd7b33e2cd3017f1d1eb1c872b57285b4f46a5\": rpc error: code = NotFound desc = could not find container \"703491321f6c264b194e0d5c6bbd7b33e2cd3017f1d1eb1c872b57285b4f46a5\": container with ID starting with 703491321f6c264b194e0d5c6bbd7b33e2cd3017f1d1eb1c872b57285b4f46a5 not found: ID does not exist" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.422787 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpffw\" (UniqueName: \"kubernetes.io/projected/32f9ccb7-b382-456d-aead-fa46e4105e36-kube-api-access-vpffw\") pod \"32f9ccb7-b382-456d-aead-fa46e4105e36\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.423013 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-catalog-content\") pod \"32f9ccb7-b382-456d-aead-fa46e4105e36\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.423073 4811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-utilities\") pod \"32f9ccb7-b382-456d-aead-fa46e4105e36\" (UID: \"32f9ccb7-b382-456d-aead-fa46e4105e36\") " Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.424077 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-utilities" (OuterVolumeSpecName: "utilities") pod "32f9ccb7-b382-456d-aead-fa46e4105e36" (UID: "32f9ccb7-b382-456d-aead-fa46e4105e36"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.429884 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f9ccb7-b382-456d-aead-fa46e4105e36-kube-api-access-vpffw" (OuterVolumeSpecName: "kube-api-access-vpffw") pod "32f9ccb7-b382-456d-aead-fa46e4105e36" (UID: "32f9ccb7-b382-456d-aead-fa46e4105e36"). InnerVolumeSpecName "kube-api-access-vpffw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.458335 4811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32f9ccb7-b382-456d-aead-fa46e4105e36" (UID: "32f9ccb7-b382-456d-aead-fa46e4105e36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.526559 4811 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.526616 4811 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32f9ccb7-b382-456d-aead-fa46e4105e36-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.526636 4811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpffw\" (UniqueName: \"kubernetes.io/projected/32f9ccb7-b382-456d-aead-fa46e4105e36-kube-api-access-vpffw\") on node \"crc\" DevicePath \"\"" Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 11:22:52.665025 4811 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvpw2"] Feb 19 11:22:52 crc kubenswrapper[4811]: I0219 
11:22:52.682653 4811 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvpw2"] Feb 19 11:22:54 crc kubenswrapper[4811]: I0219 11:22:54.596642 4811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32f9ccb7-b382-456d-aead-fa46e4105e36" path="/var/lib/kubelet/pods/32f9ccb7-b382-456d-aead-fa46e4105e36/volumes" Feb 19 11:23:26 crc kubenswrapper[4811]: I0219 11:23:26.788014 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:23:26 crc kubenswrapper[4811]: I0219 11:23:26.788767 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 11:23:56 crc kubenswrapper[4811]: I0219 11:23:56.788060 4811 patch_prober.go:28] interesting pod/machine-config-daemon-v97zm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 11:23:56 crc kubenswrapper[4811]: I0219 11:23:56.788646 4811 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v97zm" podUID="e2e42c80-bece-4066-abad-05bc661d33f5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"